DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is a Non-Final Office Action on the merits in response to the application filed on 09/25/2025.
Claims 1, 3 – 8, 10 – 15, and 17 – 23 are currently pending in this application.
Claims 1, 8, and 15 have been amended.
Response to Remarks
Examiner’s Response to Remarks Regarding Claim Rejections:
Claim Rejections Under 35 U.S.C. § 101.
Claim Rejections Under 35 U.S.C. § 103.
Claim Rejections Under 35 U.S.C. § 101
Applicant argues the amended claims are no longer directed to an abstract idea or mental process, but instead recite a specific, concrete technical implementation of a machine learning-driven training compliance system.
Examiner respectfully disagrees. Applicant’s amended claims are directed to a method, which is a statutory category; however, claim 1 recites abstract ideas. The limitations of independent claim 1, under their broadest reasonable interpretation, recite certain methods of organizing human activity but for the recitation of generic computer components. For example, claim 1 recites receiving behavior data associated with a user, wherein the behavior data includes at least a time of day, a session duration, and a notification to which the user previously responded to initiate a training unit associated with a previously completed training unit by the user; computing a responsiveness score based on the user's historical responses to prior notifications, and selecting as the reminder the channel whose responsiveness score exceeds a predetermined threshold; generating a training duration and a reminder time for the user; generating a reminder based on the user's predicted behavior, wherein the reminder comprises an uncompleted sub-unit; generating a completion time within a threshold of the training duration for the user; transmitting the reminder to the user at the reminder time; and analyzing the user's activity after the reminder was transmitted to determine whether the user completed the sub-unit. These limitations recite the management of interactions between a human and a computer. Accordingly, claim 1 recites certain methods of organizing human activity.
Claim 1 also recites mathematical concepts. For example, claim 1 recites iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data; generating, using the personalized predictive behavior model, a training duration, a reminder type, and a reminder time for the user; and updating the personalized predictive behavior model to improve future predictions. These limitations involve mathematical relationships of variables that represent some modeled characteristic. Accordingly, claim 1 recites mathematical concepts.
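To illustrate the characterization of the recited updating step as a mathematical concept, the limitation can be reduced to a generic coefficient adjustment. The following sketch is illustrative only; all names, the linear model form, and the update rule are hypothetical, as the claims recite no particular architecture or training rule.

```python
# Illustrative sketch only: "iteratively updating a personalized predictive
# behavior model" expressed as a generic coefficient adjustment. All names,
# the linear model form, and the update rule are hypothetical.

def update_model(weights, features, observed, lr=0.01):
    """One generic update step: adjust coefficients toward observed behavior."""
    predicted = sum(w * x for w, x in zip(weights, features))
    error = predicted - observed
    return [w - lr * error * x for w, x in zip(weights, features)]

# Hypothetical coefficients and scaled inputs (e.g., time of day, session duration).
weights = [0.2, 0.5]
weights = update_model(weights, [1.0, 0.5], observed=1.0)
```

As the sketch shows, the recited "refining predictions over time" amounts to using known data to set and adjust coefficients, i.e., mathematical relationships of variables.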
Applicant’s claim 1 does not integrate the judicial exceptions into a practical application that imposes a meaningful limit on the judicial exceptions. Claim 1 is not significantly more than the abstract idea and provides no improvement to the computer or to a technological field. Applicant recites in Remarks, “the use of machine learning to create and refine predictive behavior models, dynamically segments training units into sub-units based on user availability, and generates personalized reminders tailored to the user's behavior and progress.” However, Applicant is merely analyzing data and performing correlation analysis; as recited in Applicant’s Spec. ¶ 0050, the system uses one or more machine learning algorithms to modify the personalized predictive behavior model using the behavior data of the user. There are no additional elements recited that amount to significantly more than the judicial exception. There is no improvement to the computer or to a technological field, and Applicant’s claims resolve a business problem, namely managing employee training compliance. The limitations of dependent claims 3 – 7, 10 – 14, and 17 – 23 do not integrate the abstract idea into a practical application because none of their additional elements sets forth limitations that meaningfully limit the abstract idea. Claims 8 and 15 are similar to claim 1 and recite the same abstract ideas.
For the reasons above, claims 1, 3 – 8, 10 – 15, and 17 – 23, are rejected under 35 U.S.C. § 101.
Claim Rejections Under 35 U.S.C. § 103.
Applicant argues the amendments to claim 1 introduce features that are absent from the prior art and that yield a non-obvious improvement in training compliance management systems.
Examiner respectfully disagrees. Applicant has amended independent claims 1, 8, and 15. A new search was necessitated by the amendments to the independent claims, and new art has been applied to the claims. Accordingly, claims 1, 3 – 8, 10 – 15, and 17 – 23 are rejected under 35 U.S.C. § 103.
Claim Rejections: 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3 – 8, 10 – 15, and 17 – 23 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 8, and 15 recite:
receiving behavior data associated with a user wherein the behavior data includes at least a time of day, a session duration, and a notification to which the user previously responded;
computing a responsiveness score based on the user's historical responses to prior notifications, and selecting as the reminder the channel whose responsiveness score exceeds a predetermined threshold;
generating a training duration, a reminder, and a reminder time for the user;
generating a completion time within a threshold of the training duration for the user;
transmitting the reminder to the user at the reminder time;
analyzing the user's activity after the reminder was transmitted to determine whether the user completed the sub-unit.
The limitations of claim 1, under their broadest reasonable interpretation, recite certain methods of organizing human activity. The claim particularly recites the management of interactions between a human and a computer. For example, claim 1 recites receiving behavior data associated with a user, wherein the behavior data includes at least a time of day, a session duration, and a notification to which the user previously responded to initiate a training unit associated with a previously completed training unit by the user; computing a responsiveness score based on the user's historical responses to prior notifications, and selecting as the reminder the channel whose responsiveness score exceeds a predetermined threshold; generating a training duration and a reminder time for the user; generating a reminder based on the user's predicted behavior, wherein the reminder comprises an uncompleted sub-unit; generating a completion time within a threshold of the training duration for the user; transmitting the reminder to the user at the reminder time; and analyzing the user's activity after the reminder was transmitted to determine whether the user completed the sub-unit. These limitations all involve activity between a person and a computer. Accordingly, claim 1 recites certain methods of organizing human activity.
The limitations of claim 1, under their broadest reasonable interpretation, recite mathematical concepts but for the recitation of generic computer components such as a training compliance management system and a multi-layer neural network having hidden layers. For example, claim 1 recites iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data, to refine predictions of a behavior of the user over time; computing, for each candidate notification channel, a responsiveness score based on the user's historical responses to prior notifications, and selecting as the reminder type the channel whose responsiveness score exceeds a predetermined threshold; generating, using the personalized predictive behavior model, a training duration, a reminder type, and a reminder time for the user; dynamically evaluating sub-units of a training unit that each have a completion time within a threshold of the training duration for the user, wherein the sub-units are generated based on a predicted availability of the user; generating a reminder based on the user's predicted behavior, wherein the reminder comprises an uncompleted sub-unit of the sub-units of the training unit; and analyzing the user's activity after the personalized reminder was transmitted to determine whether the user completed the sub-unit, and updating the personalized predictive behavior model to improve future predictions. These limitations all involve mathematical relationships: the claim limitations constitute mathematical concepts that manipulate data using mathematical functions and use known data to set and adjust coefficients, i.e., mathematical relationships of variables that represent some modeled characteristic. Accordingly, claim 1 recites mathematical concepts.
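Similarly, the recited responsiveness-score computation and threshold selection can be expressed as elementary arithmetic over recorded response history. The following sketch is illustrative only; the function names, the response-fraction formula, and the 0.5 threshold value are hypothetical, as the claims recite no particular formula or threshold.

```python
# Illustrative sketch only: the recited "responsiveness score" and threshold
# selection, reduced to elementary arithmetic. All names, the response-fraction
# formula, and the 0.5 threshold are hypothetical.

def responsiveness_score(responses):
    """Fraction of prior notifications on a channel that the user responded to."""
    return sum(responses) / len(responses) if responses else 0.0

def select_reminder_channel(history, threshold=0.5):
    """Select a channel whose responsiveness score exceeds the threshold."""
    for channel, responses in history.items():
        if responsiveness_score(responses) > threshold:
            return channel
    return None

history = {
    "email": [1, 0, 0, 0],  # responded to 1 of 4 prior notifications (0.25)
    "text":  [1, 1, 0, 1],  # responded to 3 of 4 prior notifications (0.75)
}
print(select_reminder_channel(history))  # prints "text"
```

The sketch demonstrates that the limitation, as recited, is a comparison of computed values against a threshold rather than a particular technical implementation.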
The dependent claims encompass the same abstract ideas as well. For instance, claims 3, 10, and 17 are directed towards observing a set of users with a common characteristic, and evaluating a semi-specialized predictive behavior model using personalized predictive behavior models corresponding to each user of the set of users; claims 4, 11, and 18 are directed towards observing an indication of a new user associated with the common characteristic, and evaluating a training reminder for the new user using the semi-specialized predictive behavior model; claims 5, 12, and 19 are directed towards observing the reminder type is a text message, a social media message, an email, or a phone call; claims 6, 13, and 20 are directed towards in response to identifying a different uncompleted sub-unit of the training unit associated with the user, evaluating a different reminder comprising the different uncompleted sub-unit of the training unit; claims 7 and 14 are directed towards in response to determining that there are no uncompleted sub-units associated with the training unit, evaluating a notification indicating completion of the training unit; claim 21 is directed towards evaluating sub-units of the training unit includes dividing the training unit into sub-units that each have a duration that is based on the predicted availability of the user; claim 22 is directed towards evaluating adjusting the threshold completion time of the sub-units based on the user's historical session durations and predicted attention span; and claim 23 is directed towards observing the personalized predictive behavior model is further updated based on external data sources, including calendar availability, weather conditions, or other contextual factors affecting the user's availability. Accordingly, the dependent claims encompass the same abstract ideas.
These judicial exceptions are not integrated into a practical application. Claim 1 recites the additional elements of a training compliance management system, a notification type to which the user previously responded to initiate a training unit, personalized reminder, reminder type, processing the behavior data using vector instructions resident in a processor set, the processor set including processing circuitry configured for parallel execution, iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data, to refine predictions of a behavior of the user over time, and generating, using the personalized predictive behavior model, and updating the personalized predictive behavior model to improve future predictions. In addition to reciting the additional elements of claim 1, claim 8 recites the additional elements of a system, a memory, one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors, a training compliance management system, and a computer readable storage medium; in addition to reciting the additional elements of claim 1, claim 15 recites the additional elements of a computer readable storage medium and a processor.
However, the additional elements of a notification type to which the user previously responded to initiate a training unit, a personalized reminder, processing the behavior data using vector instructions resident in a processor set, the processor set including processing circuitry configured for parallel execution, iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data, to refine predictions of a behavior of the user over time, generating, using the personalized predictive behavior model, updating the personalized predictive behavior model to improve future predictions, a memory, one or more processors, a training compliance management system, one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors, and a computer readable storage medium are considered generic computer components as per Applicant’s Specification, shown below:
“[0026] Client computer 101 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 130. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment 100, detailed discussion is focused on a single computer, specifically computer 101, to keep the presentation as simple as possible. Computer 101 may be located in a cloud, even though it is not shown in a cloud in Figure 1. On the other hand, computer 101 is not required to be in a cloud except to any extent as may be affirmatively indicated.”
and thus do not integrate the judicial exceptions into a practical application nor amount to significantly more.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As stated above, a notification type to which the user previously responded to initiate a training unit, processing the behavior data using vector instructions resident in a processor set, the processor set including processing circuitry configured for parallel execution, iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data, to refine predictions of a behavior of the user over time, generating, using the personalized predictive behavior model, updating the personalized predictive behavior model to improve future predictions, personalized reminder, a memory, one or more processors, a training compliance management system, one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors, and a computer readable storage medium are considered generic computer components performing generic computer functions and amount to no more than mere instructions using generic computer components to implement the judicial exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
Dependent claims 3 – 7, 10 – 14, and 17 – 23, when analyzed both individually and in combination, are also held to be ineligible for the same reasons above, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea.
These limitations, considered as an ordered combination and individually, add nothing that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use generic computer components to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Therefore, claims 1, 3 – 8, 10 – 15, and 17 – 23 are not patent eligible.
Claim Rejections: 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3 – 8, 10 – 15, 17 – 20, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Korenblit, Shmuel et al. (U.S. Publication No. 2007/0195944), hereinafter “Korenblit,” in view of Soffer, Ronen Aharon (U.S. Publication No. 2017/0372267), hereinafter “Soffer,” further in view of Wright, David et al. (U.S. Publication No. 2023/0351435), hereinafter “Wright.”
Claims 1, 8, and 15:
Korenblit teaches the following:
A computer-implemented method comprising: receiving behavior data associated with a user of a training compliance management system and a notification type to which the user previously responded to initiate a training unit; Korenblit teaches in ¶ 0004, customer centers record agent interactions during phone, e-mail, video conferencing, and messaging applications with customers; the recordings may be reviewed later, and supervisors may receive the interaction data of the agent to monitor for compliance. Korenblit teaches in ¶ 0010, an integrated customer center system; Korenblit teaches in ¶ 0032, scheduling to select the best time to administer training; Korenblit teaches in ¶ 0072, compliance with business or government regulations; Korenblit teaches in ¶ 0162, an exemplary playback window, which is displayed after a user has selected a specific drill through option on the pulse screen. The playback user interface includes a play button, pause button, stop button, rewind button, fast forward button, back to the start button, and to the end button. It also includes a start time showing the date and time of the start of the recording and an end time showing the date and time of the end of the recording. The playback user interface further includes the name of the person being recorded, the site at which the person is working, and the phone number or extension of the person being recorded, among others. It should be noted that the playback can not only play back voice recordings but also activities that occurred on a display device of a PC, such as during text messaging. Korenblit teaches in ¶ 0074, most preferably, the adherence component 360 provides a real-time view of every activity across each channel in the customer center, including those in the front and back office, so supervisors/customer centers can see how their staff spends its time.
In an enhancement, alerts can be set to notify supervisors when agents are out-of-adherence, and exception management can help ensure agents are correctly recognized for work they have performed. Korenblit teaches in ¶ 0150, the drill through engine determines whether to include a drill through option, which provides a link to obtain information indicating the root cause of the modification of the schedule. In block 1530, responsive to determining that the drill through option is to be included, the drill through engine includes the drill through option in the graphical user interface, and in block 1535, the user selects the drill through option, which obtains information indicating the root cause of the modification of the schedule. The drill through engine can further provide a link to obtain information indicating whether the agent completed or did not complete the training activity.
generating a personalized reminder based on the reminder type and the user's predicted behavior, wherein the personalized reminder comprises an uncompleted sub-unit of the sub-units of the training unit; Korenblit teaches in ¶ 0032, customer centers can use advanced workforce management forecasting and scheduling to select the best time to administer training (which is proven to be more effective than classroom or group learning) as well as freeing the supervisors from working one-on-one with agents. Korenblit teaches in ¶ 0079, a customer center manager developing and assigning training lessons for an agent. Korenblit teaches in ¶ 0083, a quality manager monitoring the agent’s training and performance and determining when training has not been met. Korenblit teaches in ¶ 0084, sending a drill through engine link to the agent’s user interface indicating a training has not been met, where the reminder type is a drill through link sent to the agent’s user interface.
While Korenblit teaches receiving the interaction data of the agent, retrieving searches for further refinement and analysis, refining forecasts and performance goals based on the collected data, preferences for shift assignment, a lesson duration, a subsystem, and completed training, Korenblit does not explicitly teach creating and iteratively updating a personalized predictive behavior model associated with the user using machine learning algorithms and the behavior data to refine predictions of a behavior of the user over time. However, Soffer teaches the following:
wherein the behavior data includes at least a time of day, a session duration, and is associated with a previously completed training unit by the user; Soffer teaches in ¶ 0039, behavior data reflecting patterns in the user's behavior regarding voice call events during a time of day such as 10:00 am; Soffer teaches a voice call event duration between 10:00 am and 10:30 am; Soffer teaches in ¶ 0040, user Bob prefers to take calls after exercising, as determined from a prior recorded behavior;
dynamically generating sub-units of a training unit that each have a completion time within a threshold of the training duration for the user, wherein the sub-units are generated based on a predicted availability of the user; Soffer teaches in ¶ 0020, the offline pre-trained data model may apply the inputs for a training process from the user's calendar availability offering available time slots based on the user's agenda; Soffer further teaches in ¶ 0020, recommended option may be produced from a ranking of the most relevant action options, where the top N options may be suggested to the user, based on user interface and device limitations, the user context, and the intent itself. Soffer teaches in ¶ 0030, real-time data model evaluation; Soffer teaches in ¶ 0158, embodiments may feature a subset of said features;
transmitting the personalized reminder to the user at the reminder time; Soffer teaches in ¶ 0085, outputting a reminder of the task at the proposed reschedule time; Soffer teaches in ¶ 0050, The techniques described herein provide an adaptive learning of a contacting and a contacted user's actions over time to a given intent, to suggest the most relevant and personalized action option(s) for that intent given the changing context of event rescheduling;
analyzing the user's activity after the personalized reminder was transmitted to determine whether the user completed the sub-unit, and updating the personalized predictive behavior model to improve future predictions; Soffer teaches in ¶ 0020, the online logic engine may be used to generate a recommended (and personalized) option in response to user activation of a reminder; Soffer teaches in ¶ 0050, the use of the techniques indicated in sequence diagram may be used for: contextual snoozing (e.g., a user wishes to conduct the communication event at a later time); Soffer teaches in ¶ 0058, operations for training may occur prior to a request for scheduling or rescheduling of a particular communication event (or, after the performance of a particular communication event, to provide updated training data for use in processing future requests); Soffer teaches subset in the above limitation.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit with techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer to assist businesses with refining and improving a personalized behavioral machine learning model (Soffer, Spec. ¶ 0031).
While Korenblit teaches receiving the interaction data of the agent, a lesson duration, a subsystem, and completed training; and Soffer teaches implementing circuitry and structural electronic components that may be configured for implementation of the techniques, the system having these devices operably coupled (e.g., communicatively coupled) with one another, iteratively updating, processing circuitry to perform the respective operations, and a trained machine learning model that is specific to a user operated by an event scheduling service, neither Korenblit nor Soffer explicitly teaches vector instructions nor a neural network. However, Wright teaches the following:
processing the behavior data using vector instructions resident in a processor set, the processor set including processing circuitry configured for parallel execution; Wright teaches in ¶ 0033, the processing device 120, and other processors described herein, generally include circuitry for implementing communication and/or logic functions of the mobile device 106. For example, the processing device 120 may include a digital signal processor, a microprocessor, and various analog to digital converters, digital to analog converters, and/or other support circuits. Control and signal processing functions of the mobile device 106 are allocated between these devices according to their respective capabilities. The processing device 120 thus may also include the functionality to encode and interleave messages and data prior to modulation and transmission. The processing device 120 can additionally include an internal data modem. Further, the processing device 120 may include functionality to operate one or more software programs, which may be stored in the memory device 122. For example, the processing device 120 may be capable of operating a connectivity program, such as the previously described web browser application. The web browser application may then allow the mobile device 106 to transmit and receive web content, such as, for example, location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP), and/or the like. The application 132 related to the enterprise system 200 may be configured to operate in similar fashion for transmitting such web content. 
Wright teaches in ¶ 0063, a machine learning program may be configured to implement stored processing, such as decision tree learning, association rule learning, artificial neural networks, recurrent artificial neural networks, long short term memory networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, k-nearest neighbor (KNN), and the like. In some embodiments, the machine learning algorithm may include one or more image recognition algorithms suitable to determine one or more categories to which an input, such as data communicated from a visual sensor or a file in JPEG, PNG or other format, representing an image or portion thereof, belongs. Additionally or alternatively, the machine learning algorithm may include one or more regression algorithms configured to output a numerical value given an input. Further, the machine learning may include one or more pattern recognition algorithms, e.g., a module, subroutine or the like capable of translating text or string characters and/or a speech recognition module or subroutine. In various embodiments, the machine learning module may include a machine learning acceleration logic, e.g., a fixed function matrix multiplication logic, in order to implement the stored processes and/or optimize the machine learning logic training and interface. Wright teaches in ¶ 0072, the machine learning program may include one or more support vector machines. A support vector machine may be configured to determine a category to which input data belongs. For example, the machine learning program may be configured to define a margin using a combination of two or more of the input variables and/or data points as support vectors to maximize the determined margin. Such a margin may generally correspond to a distance between the closest vectors that are classified differently. 
The machine learning program may be configured to utilize a plurality of support vector machines to perform a single classification. For example, the machine learning program may determine the category to which input data belongs using a first support vector determined from first and second data points/variables, and the machine learning program may independently categorize the input data using a second support vector determined from third and fourth data points/variables. The support vector machine(s) may be trained similarly to the training of neural networks, e.g., by providing a known input vector (including values for the input variables) and a known output classification. The support vector machine is trained by selecting the support vectors and/or a portion of the input vectors that maximize the determined margin.
computing, for each candidate notification channel, a responsiveness score based on the user's historical responses to prior notifications, and selecting as the reminder type the channel whose responsiveness score exceeds a predetermined threshold, generating, using the personalized predictive behavior model, a training duration, a reminder type, and a reminder time for the user; Wright teaches in ¶ 0136, each event predicted by the predictive model may be associated with a probability within the prediction data, and such probabilities of certain events occurring may represent the triggering condition for the computing system 206 to take further action with respect to a certain product and/or service. For example, each different product and/or service associated with the predictive model may have a unique threshold value that must be met or exceeded for the computing system 206 to take action as described hereinafter. If multiple products are being evaluated by the predictive model, only those products indicated by the predictive model as having a certain likelihood of engagement (such as purchase thereof) may be associated with a communication from the computing system 206. The computing system 206 may also be configured to only send communications with respect to those products and/or services ranked as being the most likely to be positively engaged with by the user 110, so as to avoid overwhelming the user 110 with excessive offers or promotions. Wright teaches in ¶ 0137, the communication to the corresponding user 110 from the enterprise system 200 may occur using any known communication method. For example, an email, text message, push notification, or the like may be generated by the computing system 206 for communication to the corresponding user 110. 
Such a communication may be communicated from the computing system 206 to the user device 104, 106 of the user 110 using any of the methods described hereinabove in describing the communication capabilities of the devices 104, 106 and systems 200, 206 within Fig. 1. The user 110 may then review such a communication via interaction with the corresponding user device 104, 106, which provides a perceptible expression of the content of the communication. Such a perceptible expression of the content of the communication may include the information being communicated being visually perceptible, such as in the form of readable text able to be displayed on the user device 104, 106, or audibly perceptible, such as in the form of an audio file able to be played by the user device 104, 106. The display 140 of the user device 106 or the speaker 144 of the user device 106 may be utilized in perceiving the content of the communication. Wright teaches in ¶ 0145, the predictive model may be configured to make a prediction regarding the preference of the user 110 regarding various different account settings or the like associated with the manner in which the user 110 interacts with the computing system 206 or the enterprise system 200. For example, the personal data profile of the user 110 may indicate that the user 110 is likely to adopt or prefer a specific account setting relating to the number, type, or form of communications occurring between the computing system 206 and the user device 104, 106. In reaction to this prediction, the computing system 206 may request confirmation from the user 110 of such a change to the account setting, or the computing system 206 may automatically make the adjustment to the account setting in the absence of user 110 approval, where applicable.
creating and iteratively updating a personalized predictive behavior model associated with the user using a multi-layer neural network having multiple hidden layers trained with the processed behavior data, to refine predictions of a behavior of the user over time; Wright teaches in ¶ 0065 a feedforward network (see, e.g., feedforward network 260 referenced in Fig. 2A) may include a topography with a hidden layer 264 between an input layer 262 and an output layer 266. The input layer 262, having nodes commonly referenced in Fig. 2A as input nodes 204 for convenience, communicates input data, variables, matrices, or the like to the hidden layer 264, having nodes 274. The hidden layer 264 generates a representation and/or transformation of the input data into a form that is suitable for generating output data. Adjacent layers of the topography are connected at the edges of the nodes of the respective layers, but nodes within a layer typically are not separated by an edge. In at least one embodiment of such a feedforward network, data is communicated to the nodes 204 of the input layer, which then communicates the data to the hidden layer 264. The hidden layer 264 may be configured to determine the state of the nodes in the respective layers and assign weight coefficients or parameters of the nodes based on the edges separating each of the layers, e.g., an activation function implemented between the input data communicated from the input layer 262 and the output data communicated to the nodes 276 of the output layer 266. It should be appreciated that the form of the output from the neural network may generally depend on the type of model represented by the algorithm. Although the feedforward network 260 of Fig. 2A expressly includes a single hidden layer 264, other embodiments of feedforward networks within the scope of the descriptions can include any number of hidden layers. 
The hidden layers are intermediate the input and output layers and are generally where all or most of the computation is done. Wright teaches in ¶ 0081, in step 604, data is received, collected, accessed, or otherwise acquired and entered as can be termed data ingestion. In step 606 the data ingested in step 604 is preprocessed, for example, by cleaning, and/or transformation such as into a format that the following components can digest. The incoming data may be versioned to connect a data snapshot with the particularly resulting trained model. As newly trained models are tied to a set of versioned data, preprocessing steps are tied to the developed model. If new data is subsequently collected and entered, a new model will be generated. If the preprocessing step 606 is updated with newly ingested data, an updated model will be generated. Step 606 can include data validation, which focuses on confirming that the statistics of the ingested data are as expected, such as that data values are within expected numerical ranges, that data sets are within any expected or required categories, and that data comply with any needed distributions such as within those categories. Step 606 can proceed to step 608 to automatically alert the initiating user, other human or virtual agents, and/or other systems, if any anomalies are detected in the data, thereby pausing or terminating the process flow until corrective action is taken. Wright teaches in ¶ 0082, training test data such as a target variable value is inserted into an iterative training and testing loop. In step 612, model training, a core step of the machine learning work flow, is implemented. A model architecture is trained in the iterative training and testing loop. 
For example, features in the training test data are used to train the model based on weights and iterative calculations in which the target variable may be incorrectly predicted in an early iteration as determined by comparison in step 614, where the model is tested. Subsequent iterations of the model training, in step 612, may be conducted with updated weights in the calculations.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit and techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer with a system for guiding interactions with a user device that requests a response from a plurality of users and stores the response as response data forming a subset of a personal data set of each of the responding users of Wright to assist businesses with utilizing trained predictive modeling, vectors, and neural networks to analyze response data from responding users of a business entity (Wright, Spec. ¶ 0007).
Claims 3, 10, and 17:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
identifying a set of users with a common characteristic; Korenblit teaches in ¶ 0094, customer center agents may all have a call quality that is low for a new marketing campaign that has been in effect for only one week, where the low call quality of the agents is the common characteristic of a set of users;
and generating a semi-specialized predictive behavior model using personalized predictive behavior models corresponding to each user of the set of users; Korenblit teaches in ¶ 0118, a text transcript and searchable phonetic model of each agent’s recorded calls and the statistical model being refined to extract the context and further meaning of conversations.
Claims 4, 11, and 18:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
receiving an indication of a new user associated with the common characteristic; Korenblit teaches in ¶ 0094, a new agent may have a low call quality score, and the quality manager using the drill through engine may drill down through various levels of information associated with the campaign, via graphical user interfaces, to obtain audio recordings for analysis of the new agent’s call quality during the marketing campaign, where the low quality score indicates a new agent.
generating a training reminder for the new user using the semi-specialized predictive behavior model; Korenblit teaches in ¶ 0095, for a new agent starting the new marketing campaign, associate recording parameters with new and inexperienced agents to monitor the new agent’s transcript interactions and the quality monitor using the drill through engine informing the manager of the non-compliance of the new agent, where the non-compliance is a low quality score of the text transcript;
and transmitting the training reminder to the new user at a time determined using the semi-specialized predictive behavior model; Korenblit teaches in ¶ 0114, a quality performance manager sending a flagged problem area to the agent where the user can click on a link to obtain the flagged call-statistic score.
Claims 5, 12, and 19:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
wherein the reminder type is a text message, a social media message, an email, or a phone call; Korenblit teaches in ¶ 0080, delivering learning sessions that may be complete or incomplete over a network, using e-mail, or a hyperlink to a Web site.
Claims 6, 13, and 20:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
in response to identifying a different uncompleted sub-unit of the training unit associated with the user, generating a different reminder comprising the different uncompleted sub-unit of the training unit; Korenblit teaches in ¶ 0064, a sixth state receiving information about various measurements, assessments, recordings, and examinations from the third state and tracking information indicating whether the various assessments indicate that the agent complied with the customer center policies. Korenblit teaches in ¶ 0069, the fifth state providing scorecards to the agents and customer center operators, and notifying the agents that changes have been made to a particular customer center operations because the customer center did not meet business goals.
and transmitting the different reminder to the user at another time determined using the personalized predictive behavior model; Korenblit teaches in ¶ 0074, the adherence component providing a real-time view of every activity across each channel in the customer center, allowing supervisors to see how their staff spends its time and alerts may be set to notify supervisors when agents are out-of-adherence, where an alert may be a reminder.
Claims 7 and 14:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
in response to determining that there are no uncompleted sub-units associated with the training unit, generating a notification indicating completion of the training unit; Korenblit teaches in ¶ 0054, how well each agent complies with customer center policies, where complying with the policies is likened to completion of the training unit;
and transmitting the notification to the user; Korenblit teaches in ¶ 0054, a workforce manager (WFM) supplying the supervisor with this information.
Claim 23:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Soffer further teaches the following:
wherein the personalized predictive behavior model is further updated based on external data sources, including calendar availability, weather conditions, or other contextual factors affecting the user's availability; Soffer teaches in ¶ 0030, schedule availability (based on calendar meetings), planned locations, nature of relationship (e.g. family, friends, colleagues, etc.), planned activity (e.g. driving, running), routine behavior (e.g. usually talk during work hours), and the like, where schedule availability, planned locations, and planned activities are contextual factors drawn from sources external to the model and may be likened to external data sources.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit and a system for guiding interactions with a user device that requests a response from a plurality of users and stores the response as response data forming a subset of a personal data set of each of the responding users of Wright with techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer to assist businesses with refining and improving a personalized behavioral machine learning model (Soffer, Spec. ¶ 0031).
Claims 21 – 22 are rejected under 35 U.S.C. 103 as being unpatentable over Korenblit, Shmuel et al. (U.S. Publication No. 2007/0195944) hereinafter “Korenblit” in view of Soffer, Ronen Aharon (U.S. Publication No. 2017/0372267) hereinafter “Soffer” in view of Wright, David et al. (U.S. Publication No. 2023/0351435) hereinafter “Wright” in view of Millius, Sebastian et al. (U.S. Publication No. 2018/0211178) hereinafter “Millius”.
Claim 21:
While Korenblit, Soffer, and Wright teach claims 1, 8, and 15, neither Korenblit, Soffer, nor Wright explicitly teaches dividing the training unit into sub-units that each have a duration. However, Millius teaches the following:
wherein dynamically generating sub-units of the training unit includes dividing the training unit into sub-units that each have a duration that is based on the predicted availability of the user; Millius teaches in ¶ 0057, status engine may receive “fresh” data from the data engine continuously, periodically, or at other regular and/or non-regular intervals to enable dynamic determination and adaptation of the status of a user; Millius teaches that although Fig. 2A illustrates three separate status notifications being provided to different groups of client devices, in some implementations more or fewer status notifications may be transmitted and/or may be transmitted for presentation to more or fewer additional users. Millius teaches in ¶ 0081, the data engine selects, from the user data, a subset of data for providing to the status engine and predicted duration engine.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit and techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer and a system for guiding interactions with a user device that requests a response from a plurality of users and stores the response as response data forming a subset of a personal data set of each of the responding users of Wright with automatically generating and/or automatically transmitting a status of a user of Millius to assist businesses in selecting a subset of data for providing to the status engine and predicted duration engine (Millius, Spec. ¶ 0070).
Claim 22:
Korenblit, Soffer, and Wright teach claims 1, 8, and 15. Korenblit further teaches the following:
the threshold completion time based on the user's predicted attention span; Korenblit teaches in ¶ 0103, the lesson assignment component examines one or more of the KPIs for a particular agent, and makes an assignment for a lesson associated with that KPI, based on criteria associated with a KPI or a competency where the criteria is a comparison of one or more KPIs for an agent to threshold values, and the lesson assignment component assigns a lesson if the KPI is lower than the threshold; Korenblit teaches in ¶ 0104, the drill through engine can monitor and track information indicating whether the KPIs of the agents are below the threshold values; Korenblit teaches in ¶ 0088, monitoring application on agent workstations tracks agent activity on the workstation that includes screen data, which may be likened to the user's predicted attention span;
While Korenblit teaches threshold, and monitor and track KPI information, Korenblit does not explicitly teach adjusting. However, Soffer teaches the following:
comprising dynamically adjusting; Soffer teaches in ¶ 0054, updating events, and adjusting and recalculating a timeline;
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit and a system for guiding interactions with a user device that requests a response from a plurality of users and stores the response as response data forming a subset of a personal data set of each of the responding users of Wright with techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer to assist businesses with refining and improving a personalized behavioral machine learning model (Soffer, Spec. ¶ 0031).
While Korenblit teaches threshold, historical data, subsystem, and monitor and track KPI information, and Soffer teaches adjusting, neither Korenblit, Soffer, nor Wright explicitly teaches historical duration. However, Millius teaches the following:
of the sub-units based on the user's historical session durations; Millius teaches in ¶ 0018, the data applied as input to the machine learning model includes historical data that indicates historical duration, for the user, for one or more activities. Millius teaches in ¶ 0103, adjust the predicted duration; Millius further teaches the adjusted predicted duration may be provided as feedback for use in generating further training examples for refining a machine learning model.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a context drilling process to optimize operations for performing workforce management, quality monitoring, e-learning, performance management, and analytics functionality at a customer center to monitor an agent’s compliance and competence of Korenblit and techniques for performing contextual event rescheduling and reminders with an event scheduling service of Soffer and a system for guiding interactions with a user device that requests a response from a plurality of users and stores the response as response data forming a subset of a personal data set of each of the responding users of Wright with automatically generating and/or automatically transmitting a status of a user of Millius to assist businesses in selecting a subset of data for providing to the status engine and predicted duration engine (Millius, Spec. ¶ 0070).
Claim Rejections: 35 U.S.C. § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 8, and 15 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “candidate notification channel.” However, there is no support for “candidate notification channel” in Applicant’s Spec. Although Applicant’s Spec. ¶ 0006, recites in one embodiment of the present invention, wherein the reminder type is a text message, a social media message, an email, or a phone call, there is no support or teaching for “candidate notification channel.” Claim 1 recites “processing the behavior data using vector instructions resident in a processor set, the processor set including processing circuitry configured for parallel execution.” Applicant’s Spec. ¶ 0027, recites “processing circuitry 120 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 120 may implement multiple processor threads and/or multiple processor cores,” but the term “processing circuitry” is not used in the same manner as in the claims; furthermore, there is no teaching of vector instructions or parallel execution in Applicant’s Specification.
Claims 8 and 15 recite substantially similar limitations and thus are rejected for the same reasons set forth above. Additionally, claims 3 – 7, 10 – 14, and 17 – 23 depend on claims 1, 8, and 15 and inherit the same deficiencies. Accordingly, claims 1, 3 – 8, 10 – 15, and 17 – 23, are rejected under 35 U.S.C. § 112(a).
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure:
- Reference Solmssen, Anne (U.S. Publication No. 2023/0042345) discloses systems, methods, and machine-readable media for compliance training that includes determining a compliance status of a user.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Frank Alston whose telephone number is 703-756-4510. The Examiner can normally be reached 9:00 AM – 5:00 PM Monday - Friday. Examiner can be reached via Fax at 571-483-7338. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor Beth Boswell can be reached at (571) 272-6737.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK MAURICE ALSTON/
Examiner, Art Unit 3625
1/10/2026
/BETH V BOSWELL/Supervisory Patent Examiner, Art Unit 3625