DETAILED ACTION
This communication is in response to the Application filed on May 30, 2024.
Claims 1-20 are pending and have been examined.
Claims 1, 10, and 16 are independent.
Domestic priority: May 31, 2023.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on May 30, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings filed on May 30, 2024 have been accepted and considered by the Examiner.
Double Patenting Note
The Examiner notes that previously published U.S. patent application publications 2015/0261758, 2018/0268253, and 2024/0403341 were analyzed for double patenting. However, based on the current claim scope, no double patenting rejection is made.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (U.S. Patent Application Publication No. 2024/0086648), hereinafter referred to as Han, in view of Miller et al. (U.S. Patent Application Publication No. 2024/0354641), hereinafter referred to as Miller.
Regarding Claims 1, 10 and 16, Han teaches:
1. A computer-implemented method comprising, 10. A non-transitory computer-readable storage medium storing executable computer program instructions, the computer program instructions when executed by one or more processors of a system causing the system to, and 16. A data processing system, comprising: one or more processors; and one or more non-transitory computer-readable storage media storing executable computer program instructions, the computer program instructions when executed by the one or more processors cause the data processing system to: [Han, “System 200 may comprise secondary memory 220. Secondary memory 220 is a non-transitory computer-readable medium having computer-executable code (i.e., the claimed “executable computer program instructions”) and/or other data (e.g., any of the software disclosed herein) stored thereon. In this description, the term “computer-readable medium” is used to refer to any non-transitory computer-readable storage media used to provide computer-executable code and/or other data to or within system 200. The computer software stored on secondary memory 220 is read into main memory 215 for execution by processor 210.” Par. 0035]
receiving, at a user interface generated by a computer system for display by a user device, an input to generate an electronic message, [Han, “The method may further comprise using the at least one hardware processor to generate a graphical user interface comprising one or more inputs, wherein receiving the one or more parameter values comprises receiving an input of the one or more parameter values via the one or more inputs of the graphical user interface. The one or more inputs may be a plurality of inputs, and the method may further comprise using the at least one hardware processor to receive a selection of one of the plurality of inputs, wherein the email sequence is generated (i.e., the claimed “generate an electronic message”) in response to the selection of the one input.” Par. 0008]
wherein the input to generate the electronic message includes an identification of one or more digital content items to be associated with the electronic message; [Han, “Alternatively or additionally, subprocess 780 may automatically access scheduling software used by the agent to determine an available time for the agent, generate a meeting invite, and attach the meeting invite to the reply email message or incorporate the reply email message into the meeting invite (i.e., the claimed “identification of one or more digital content items to be associated with the electronic message”).” Par. 0143; “The one or more inputs may be a plurality of inputs, and the method may further comprise using the at least one hardware processor to receive a selection of one of the plurality of inputs, wherein the email sequence is generated (i.e., the claimed “generate an electronic message”) in response to the selection of the one input.” Par. 0008]
processing, by the computer system, a plurality of signals retrieved based on the received input, [Han, “The one or more inputs may be a plurality of inputs (i.e., the claimed “plurality of signals retrieved based on the received input”), and the method may further comprise using the at least one hardware processor to receive a selection of one of the plurality of inputs, wherein the email sequence is generated (i.e., the claimed “generate an electronic message”) in response to the selection of the one input.” Par. 0008]
wherein the plurality of signals include content metadata associated with the one or more digital content items; [Han, “The one or more inputs may be a plurality of inputs (i.e., the claimed “plurality of signals retrieved based on the received input”), and the method may further comprise using the at least one hardware processor to receive a selection of one of the plurality of inputs, wherein the email sequence is generated (i.e., the claimed “generate an electronic message”) in response to the selection of the one input.” Par. 0008; “Alternatively or additionally, subprocess 780 may automatically access scheduling software used by the agent to determine an available time for the agent, generate a meeting invite, and attach the meeting invite to the reply email message or incorporate the reply email message into the meeting invite (i.e., the claimed “identification of one or more digital content items to be associated with the electronic message”).” Par. 0143]
generating, by the computer system, a prompt to a large language model (LLM) to cause the LLM to generate content for the electronic message, [Han, “The embodiments described herein are generally directed to artificial intelligence (AI), and, more particularly, to automated generation of email sequences (i.e., the claimed “generate content for electronic message”) using an AI model, such as a generative language model (i.e., the claimed “large language model”).” (Par. 0002); “While GPT-4 is used as an example, it should be understood that the machine-learning model may be any generative language model or other generative artificial intelligence (AI) model, including past and future generations of GPT, as well as other large language models.” Par. 0059]
wherein the prompt is generated at least in part based on the processing of the plurality of signals and instructs the LLM to generate a summary of the one or more digital content items for inclusion in the electronic message based on the content metadata; [Han, “While GPT-4 is used as an example, it should be understood that the machine-learning model may be any generative language model or other generative artificial intelligence (AI) model, including past and future generations of GPT, as well as other large language models.” Par. 0059; “in response to a selection of the input for regenerating the selected content block, re-input the prompt, generated for the selected content block, to the generative language model (i.e., the claimed “LLM”) to produce one or more new suggestions for the selected content block.” Par. 0008; “Alternatively or additionally, subprocess 780 may automatically access scheduling software used by the agent to determine an available time for the agent, generate a meeting invite, and attach the meeting invite (i.e., the claimed “one or more digital content items”) to the reply email message or incorporate (i.e., the claimed “inclusion”) the reply email message into the meeting invite.” Par. 0143; “The one or more inputs may be a plurality of inputs (i.e., the claimed “plurality of signals retrieved based on the received input”), and the method may further comprise using the at least one hardware processor to receive a selection of one of the plurality of inputs, wherein the email sequence is generated (i.e., the claimed “generate an electronic message”) in response to the selection of the one input.” Par. 0008]
populating, into the user interface, content for the electronic message that is generated based on output by the LLM; and [Han, “Outputting the generated email sequence may comprise updating (i.e., the claimed “populating”) the graphical user interface to display the generated email sequence (i.e., the claimed “content for the electronic message that is generated based on output by the LLM”).” Par. 0008; “While GPT-4 is used as an example, it should be understood that the machine-learning model may be any generative language model or other generative artificial intelligence (AI) model, including past and future generations of GPT, as well as other large language models.” Par. 0059]
generating a transmissible payload that includes the content for the electronic message and the one or more digital content items attached to or linked within the transmissible payload. [Han, “Alternatively or additionally, subprocess 780 may automatically access scheduling software used by the agent to determine an available time for the agent, generate a meeting invite, and attach the meeting invite to the reply email message or incorporate the reply email message into the meeting invite (i.e., the claimed “one or more digital content items attached to or linked within the transmissible payload”).” Par. 0143; “Platform 110 transmits or serves one or more screens of the graphical user interface in response to requests (i.e., the claimed “transmissible payload”) from user system(s) 130.” Par. 0026]
Han fails to explicitly teach content metadata associated with the one or more digital content items.
However, Miller teaches:
wherein the input to generate the electronic message includes an identification of one or more digital content items to be associated with the electronic message; [Miller, “The personal AI agent 302 can retrieve data from external data sources 316 to apply to multimodal memory 308 and/or to use in generating the input (i.e., the claimed “plurality of signals”) for the neural network engine 314. The external data sources 316 can include any combination of a repository of scientific papers that include content information about the papers, title, author, abstract, keywords, and citations; data from emails, such as email addresses, contact lists, email content (i.e., the claimed “one or more digital content items to be associated with the electronic message”), attachments (i.e., the claimed “one or more digital content items to be associated with the electronic message”), and metadata (such as timestamps and IP addresses); search engine data, such as search queries, search history, location data, device information, and web activity; and/or communication data, such as messages, voice and video calls, roles in group messages, posts, comments, votes, user subscriptions, likes, followers, and hashtags.” Par. 0099]
wherein the plurality of signals include content metadata associated with the one or more digital content items; [Miller, “The personal AI agent 302 can retrieve data from external data sources 316 to apply to multimodal memory 308 and/or to use in generating the input (i.e., the claimed “plurality of signals”) for the neural network engine 314. The external data sources 316 can include any combination of a repository of scientific papers that include content information about the papers, title, author, abstract, keywords, and citations; data from emails, such as email addresses, contact lists, email content, attachments, and metadata (such as timestamps and IP addresses); search engine data, such as search queries, search history, location data, device information, and web activity; and/or communication data, such as messages, voice and video calls, roles in group messages, posts, comments, votes, user subscriptions, likes, followers, and hashtags.” Par. 0099]
wherein the prompt is generated at least in part based on the processing of the plurality of signals and instructs the LLM to generate a summary of the one or more digital content items for inclusion in the electronic message based on the content metadata; [Miller, “The content response component 412 generates one or more content items 408 for presentation to the user 438. The content response component 412 uses the intent 422 received from the intent component 414 and any extracted information from the intent component 414 to determine the appropriate content items 410. This can be done using rule-based systems, decision trees, statistical models, LLMs, neural networks, and the like.” Par. 0109; “The personal AI agent 302 can retrieve data from external data sources 316 to apply to multimodal memory 308 and/or to use in generating the input (i.e., the claimed “plurality of signals”) for the neural network engine 314. The external data sources 316 can include any combination of a repository of scientific papers that include content information about the papers, title, author, abstract, keywords, and citations; data from emails, such as email addresses, contact lists, email content, attachments, and metadata (such as timestamps and IP addresses); search engine data, such as search queries, search history, location data, device information, and web activity; and/or communication data, such as messages, voice and video calls, roles in group messages, posts, comments, votes, user subscriptions, likes, followers, and hashtags.” Par. 0099; “In generative AI examples, the prediction/inference data that is output include trend assessment and predictions, translations, summaries, image or video recognition and categorization, natural language processing, face recognition, user sentiment assessments, advertisement targeting and optimization, voice recognition, or media content generation, recommendation, and personalization.” Par. 0070]
generating a transmissible payload that includes the content for the electronic message and the one or more digital content items attached to or linked within the transmissible payload. [Miller, “This message data (i.e., the claimed “content for the electronic message”) includes, for any particular message, at least message sender data, message recipient (or receiver) data, and a payload (i.e., the claimed “transmissible payload”).” Par. 0193; “The neural network engine 314 can intelligently select one or more of the external data sources 316 to populate data (i.e., the claimed “one or more digital content items attached to or linked within the transmissible payload”) to respond to the prompt received from the personal AI agent 302.” Par. 0100; “The personal AI agent 302 can retrieve data from external data sources 316 to apply to multimodal memory 308 and/or to use in generating the input (i.e., the claimed “plurality of signals”) for the neural network engine 314. The external data sources 316 can include any combination of a repository of scientific papers that include content information about the papers, title, author, abstract, keywords, and citations; data from emails, such as email addresses, contact lists, email content, attachments, and metadata (such as timestamps and IP addresses); search engine data, such as search queries, search history, location data, device information, and web activity; and/or communication data, such as messages, voice and video calls, roles in group messages, posts, comments, votes, user subscriptions, likes, followers, and hashtags.” Par. 0099; “In generative AI examples, the prediction/inference data that is output include trend assessment and predictions, translations, summaries, image or video recognition and categorization, natural language processing, face recognition, user sentiment assessments, advertisement targeting and optimization, voice recognition, or media content generation, recommendation, and personalization.” Par. 0070]
Han and Miller both pertain to electronic message generation and are therefore analogous art to the instant application. Accordingly, it would have been obvious to one of ordinary skill in the electronic message generation art to modify Han’s teachings of “automated generation of email sequences (i.e., the claimed “generate content for electronic message”) using an AI model, such as a generative language model (i.e., the claimed “large language model”)” (Han, Par. 0002) with Miller’s teachings of “metadata” (Miller, Par. 0099) in order to “incorporate multiple data modalities, using deep learning techniques for feature extraction, and fusing these representations to provide more accurate and relevant content suggestions for users” (Miller, Par. 0025).
Regarding Claims 2 and 11, Han in view of Miller has been discussed above. The combination further teaches:
wherein the plurality of signals include an identifier of a sender of the electronic message, and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “Message sender identifier (i.e., the claimed “identifier of the sender of the electronic message”) 922: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the user system 102 on which the message 900 was generated and from which the message 900 was sent (i.e., the claimed “identifier of the sender of the electronic message”).” Par. 0218]
wherein the identifier of the sender is retrieved based on a login to a user account prior to the input to generate the electronic message being received. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Han, “In an embodiment, each user account or organizational account may be associated with a distinct model 355.” Par. 0067; “The parameter value(s) (i.e., the claimed “identifier”) may be derived from user input (i.e., the claimed “login to a user account”), campaign-specific settings, account-specific settings (i.e., the claimed “login to a user account”), system-wide settings, and/or the like.” Par. 0069; Miller, “Message sender identifier (i.e., the claimed “identifier of the sender of the electronic message”) 922: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the user system 102 on which the message 900 was generated and from which the message 900 was sent (i.e., the claimed “identifier of the sender of the electronic message”).” Par. 0218; “The Application Program Interface (API) server 122 exposes various functions supported by the interaction servers 124, including account registration; login functionality (i.e., the claimed “login to a user account”);” Par. 0032]
Regarding Claims 3, 12 and 17, Han in view of Miller has been discussed above. The combination further teaches:
wherein the plurality of signals include an identifier of a recipient for the electronic message, and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “Message receiver identifier (i.e., the claimed “identifier of a receiver”) 924: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the user system 102 to which the message 900 is addressed (i.e., the claimed “recipient for the electronic message”).” Par. 0219]
wherein the identifier of the recipient is input at the user interface in association with the input to generate the electronic message. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “Message receiver identifier (i.e., the claimed “identifier of a receiver”) 924: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the user system 102 to which the message 900 is addressed (i.e., the claimed “recipient for the electronic message”).” Par. 0219; Han, “In this case, the reply email message, generated in subprocess 780, may include the email address of that agent as the sender or as a “to” or carbon-copy (cc) recipient of the reply email message (i.e., the claimed “recipient is input at the user interface in association with the input to generate the electronic message”).” Par. 0143]
Regarding Claim 4, Han in view of Miller has been discussed above. The combination further teaches:
wherein processing the plurality of signals comprises processing the identifier of the recipient to select message features for the electronic message, and [Han, see mapping applied to claim 3; Miller, see mapping applied to claim 3; Han, “Each of the one or more content blocks in the displayed email sequence may be selectable (i.e., the claimed “select message features for the electronic message”), and the method may further comprise using the at least one hardware processor to: when receiving a selection of one of the one or more content blocks in the displayed email sequence, display an input for regenerating the selected content block;” Par. 0008]
wherein generating the prompt to the LLM comprises instructing the LLM to generate the content for the electronic message using the selected message features. [Han, see mapping applied to claim 3; Miller, see mapping applied to claim 3; Han, “Each of the one or more content blocks in the displayed email sequence may be selectable, and the method may further comprise using the at least one hardware processor to: when receiving a selection of one of the one or more content blocks in the displayed email sequence, display an input for regenerating the selected content block; and in response to a selection of the input for regenerating the selected content block, re-input the prompt, generated for the selected content block, to the generative language model (i.e., the claimed “LLM”) to produce one or more new suggestions for the selected content block (i.e., the claimed “LLM to generate the content for the electronic message using the selected message features”). Each of the one or more new suggestions for the selected content block may be selectable,” Par. 0008]
Regarding Claim 5, Han in view of Miller has been discussed above. The combination further teaches:
wherein the plurality of signals further include use data that characterizes user activity associated with the one or more digital content items. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “The interaction client 104 can notify a user of the user system 102, or other users related to such a user (e.g., “friends”), of activity (i.e., the claimed “user activity”) taking place in one or more external resources (i.e., the claimed “user activity associated with the one or more digital content items”).” Par. 0037]
Regarding Claims 6, 14, and 19, Han in view of Miller has been discussed above. The combination further teaches:
wherein the plurality of signals include performance metrics associated with a plurality of prior electronic messages transmitted to respective recipients, and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “In some examples, machine learning components of the intent component 414, the content response component 412, and the neural network 436 are continuously retrained or fine-tuned based on user interactions with the content items 410. For example, interactions by a user with the content items 410 are stored in an analytics database (i.e., the claimed “plurality of prior electronic messages transmitted to respective recipients”) (not shown). Metrics of the interactions (i.e., the claimed “performance metrics associated with a plurality of prior electronic messages transmitted to respective recipients”) are then used to provide reinforcement to the content response component 412 when the content response component 412 provides a sequence of responses that lead to a successful intent 422 determination and consequently properly targeted content items 410.” Par. 0128; Han, “wherein each of the plurality of intent classification models outputs an intent classification of the email message and a confidence value; when the intent classifications, output by the plurality of intent classification models, match each other, and a confidence, represented by the confidence values (i.e., the claimed “performance metrics”) output by the plurality of classification models, satisfies a threshold, generate a reply email message by, determining one or more content blocks based on the matching intent classifications, for each of the one or more content blocks, generating a prompt based on one or more parameter values, inputting the prompt to a generative language model to produce the content block, and adding the content block to the reply email message, and outputting the reply email message;” Par. 0011; Han, “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes represents a different outcome (i.e., the claimed “performance metric”) of the email sequence. As one example, the plurality of classes may comprise or consist of “positive,” “negative,” and “unresponsive” (i.e., the claimed “performance metric”) classes.” Par. 0053]
wherein processing the performance metrics comprises: [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “In some examples, machine learning components of the intent component 414, the content response component 412, and the neural network 436 are continuously retrained or fine-tuned based on user interactions with the content items 410. For example, interactions by a user with the content items 410 are stored in an analytics database (i.e., the claimed “plurality of prior electronic messages transmitted to respective recipients”) (not shown). Metrics of the interactions (i.e., the claimed “performance metrics associated with a plurality of prior electronic messages transmitted to respective recipients”) are then used to provide reinforcement to the content response component 412 when the content response component 412 provides a sequence of responses that lead to a successful intent 422 determination and consequently properly targeted content items 410.” Par. 0128; Han, “wherein each of the plurality of intent classification models outputs an intent classification of the email message and a confidence value; when the intent classifications, output by the plurality of intent classification models, match each other, and a confidence, represented by the confidence values (i.e., the claimed “performance metrics”) output by the plurality of classification models, satisfies a threshold, generate a reply email message by, determining one or more content blocks based on the matching intent classifications, for each of the one or more content blocks, generating a prompt based on one or more parameter values, inputting the prompt to a generative language model to produce the content block, and adding the content block to the reply email message, and outputting the reply email message;” Par. 0011; Han, “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes represents a different outcome (i.e., the claimed “performance metric”) of the email sequence. As one example, the plurality of classes may comprise or consist of “positive,” “negative,” and “unresponsive” (i.e., the claimed “performance metric”) classes.” Par. 0053]
using the performance metrics to modify a prompt generation model; and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “In some examples, machine learning components of the intent component 414, the content response component 412, and the neural network 436 are continuously retrained or fine-tuned based on user interactions with the content items 410 (i.e., the claimed “performance metrics to modify a prompt generation model”). For example, interactions by a user with the content items 410 are stored in an analytics database (i.e., the claimed “plurality of prior electronic messages transmitted to respective recipients”) (not shown). Metrics of the interactions (i.e., the claimed “performance metrics associated with a plurality of prior electronic messages transmitted to respective recipients”) are then used to provide reinforcement to the content response component 412 (i.e., the claimed “performance metrics to modify a prompt generation model”) when the content response component 412 provides a sequence of responses that lead to a successful intent 422 determination and consequently properly targeted content items 410.” Par. 0128; Han, “wherein each of the plurality of intent classification models outputs an intent classification of the email message and a confidence value; when the intent classifications, output by the plurality of intent classification models, match each other, and a confidence, represented by the confidence values (i.e., the claimed “performance metrics”) output by the plurality of classification models, satisfies a threshold, generate a reply email message by, determining one or more content blocks based on the matching intent classifications, for each of the one or more content blocks, generating a prompt based on one or more parameter values, inputting the prompt to a generative language model to produce the content block, and adding the content block to the reply email message, and outputting the reply email message;” Par. 0011; Han, “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes represents a different outcome (i.e., the claimed “performance metric”) of the email sequence. As one example, the plurality of classes may comprise or consist of “positive,” “negative,” and “unresponsive” (i.e., the claimed “performance metric”) classes.” Par. 0053]
using the modified prompt generation model to generate the prompt to the LLM. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; Miller, “In some examples, machine learning components of the intent component 414, the content response component 412, and the neural network 436 are continuously retrained or fine-tuned based on user interactions with the content items 410 (i.e., the claimed “performance metrics to modify a prompt generation model”). For example, interactions by a user with the content items 410 are stored in an analytics database (i.e., the claimed “plurality of prior electronic messages transmitted to respective recipients”) (not shown). Metrics of the interactions (i.e., the claimed “performance metrics associated with a plurality of prior electronic messages transmitted to respective recipients”) are then used to provide reinforcement to the content response component 412 (i.e., the claimed “performance metrics to modify a prompt generation model”) when the content response component 412 provides a sequence of responses (i.e., the claimed “modified prompt generation model to generate the prompt to the LLM”) that lead to a successful intent 422 determination and consequently properly targeted content items 410.” Par. 0128; “The personalized AI agent system 232 generates prompts for a user based on a user's past activity (i.e., the claimed “modified prompt generation model to generate the prompt to the LLM”), interests, and behavior patterns.” Par. 0148; Han, “wherein each of the plurality of intent classification models outputs an intent classification of the email message and a confidence value; when the intent classifications, output by the plurality of intent classification models, match each other, and a confidence, represented by the confidence values (i.e., the claimed “performance metrics”) output by the plurality of classification models, satisfies a threshold, generate a reply email message by, determining one or more content blocks based on the matching intent classifications, for each of the one or more content blocks, generating a prompt based on one or more parameter values, inputting the prompt to a generative language model (i.e., the claimed “using the modified prompt generation model to generate the prompt to the LLM”) to produce the content block, and adding the content block to the reply email message, and outputting the reply email message;” Par. 0011]
Regarding Claim 7, Han in view of Miller has been discussed above. The combination further teaches:
wherein the performance metrics characterize one or more of: [Han, see mapping applied to claim 6; Miller, see mapping applied to claim 6]
a number of the prior electronic messages that were read by the respective recipients; [Han, “email conversations (i.e., the claimed “prior email messages”) between sales or marketing representatives from one or more organizations and contacts, representing customers and/or leads (i.e., the claimed “prior electronic messages that were read by the respective recipients”).” Par. 0051; “These email conversations may be organized into email sequences (i.e., the claimed “prior email messages”). For example, each email message may be associated with a conversation identifier and/or campaign identifier, which can be used to organize the email messages into email sequences. An email sequence may comprise any number of email messages, from one to many. An email sequence may comprise email message(s) that were sent by an organization to one or more contacts (i.e., the claimed “prior electronic messages that were read by the respective recipients”) and/or email message(s) that were sent by the contact(s) to the organization.” Par. 0052; “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes represents a different outcome (i.e., the claimed “performance metric”) of the email sequence. As one example, the plurality of classes may comprise or consist of “positive,” “negative,” and “unresponsive” (i.e., the claimed “performance metric”) classes. The “positive” class represents a successful outcome in the form of a positive (i.e., the claimed “performance metric”) response to the email sequence, such as an expression of interest (e.g., a reply email message that states “Yes, I am interested to find out more”) (i.e., the claimed “prior electronic messages that were read by the respective recipients”), engagement, agreement, purchase, and/or the like.” Par. 0053]
Regarding Claim 8, Han in view of Miller has been discussed above. The combination further teaches:
wherein the plurality of signals include an identification of a category of electronic message to be sent, and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes (i.e., the claimed “category”) represents a different outcome of the email sequence (i.e., the claimed “electronic message to be sent”).” Par. 0053]
wherein the prompt to the LLM specifies the identified category. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1; “Each email sequence may be classified into one of a plurality of classes. Each of the plurality of classes (i.e., the claimed “category”) represents a different outcome of the email sequence (i.e., the claimed “electronic message to be sent”).” Par. 0053; Han, “wherein each of the plurality of intent classification models outputs an intent classification of the email message and a confidence value; when the intent classifications, output by the plurality of intent classification (i.e., the claimed “category”) models, match each other, and a confidence, represented by the confidence values (i.e., the claimed “performance metrics”) output by the plurality of classification models, satisfies a threshold, generate a reply email message by, determining one or more content blocks based on the matching intent classifications, for each of the one or more content blocks, generating a prompt based on one or more parameter values, inputting the prompt to a generative language model (i.e., the claimed “prompt to the LLM specifies the identified category”) to produce the content block, and adding the content block to the reply email message, and outputting the reply email message;” Par. 0011]
Regarding Claim 9, Han in view of Miller has been discussed above. The combination further teaches:
wherein populating the content for the electronic message into the user interface comprises generating a subject line and message body for the electronic message, and [Han, “The playbook may be displayed as a list comprising, for each email sequence, an identifier of the email sequence, a name of the email sequence (e.g., as specified in input 614 when the email sequence was saved), a touch point order, a representation of the body of the email sequence (e.g., a starting snippet from the email sequence), a date and time that the email sequence was created, a date and time at which the email sequence was most recently updated, an input for deleting the email sequence, an input for editing the email sequence that when selected returns the user to graphical user interface 600, and/or the like.” Par. 0129; “Thus, the subject line may only be shown once (e.g., as part of first email message 622A).” Par. 0118]
wherein the transmissible payload includes the subject line, message body. [Han, “The playbook may be displayed as a list comprising, for each email sequence, an identifier of the email sequence, a name of the email sequence (e.g., as specified in input 614 when the email sequence was saved), a touch point order, a representation of the body of the email sequence (e.g., a starting snippet from the email sequence), a date and time that the email sequence was created, a date and time at which the email sequence was most recently updated, an input for deleting the email sequence, an input for editing the email sequence that when selected returns the user to graphical user interface 600, and/or the like.” Par. 0129; “Thus, the subject line may only be shown once (e.g., as part of first email message 622A).” Par. 0118; “Platform 110 transmits or serves one or more screens of the graphical user interface in response to requests (i.e., the claimed “transmissible payload”) from user system(s) 130.” Par. 0026; Miller, “This message data (i.e., the claimed “content for the electronic message”) includes, for any particular message, at least message sender data, message recipient (or receiver) data, and a payload (i.e., the claimed “transmissible payload”).” Par. 0193]
Regarding Claims 13 and 18, Han in view of Miller has been discussed above. The combination further teaches:
wherein the input to generate the electronic message includes an identification of one or more digital content items to be associated with the electronic message; [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1]
wherein the plurality of signals include content metadata associated with the one or more digital content items; and [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1]
wherein the prompt instructs the LLM to generate a summary of the one or more digital content items for inclusion in the electronic message based on the content metadata. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1]
Regarding Claims 15 and 20, Han in view of Miller has been discussed above. The combination further teaches:
wherein the instructions when executed further cause the system to:
generating, by the computer system, a transmissible payload for the electronic message, [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1]
wherein the transmissible payload includes the content for the electronic message and one or more content items attached to or linked within the transmissible payload. [Han, see mapping applied to claim 1; Miller, see mapping applied to claim 1]
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cunningham et al., (U.S. Patent Application Publication 2024/0406124) teaches electronic message generation.
Natoli et al., (U.S. Patent Application Publication 2025/0323888) teaches electronic message generation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUNICE LEE whose telephone number is 571-272-1886. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EUNICE LEE/Examiner, Art Unit 2656
/BHAVESH M MEHTA/ Supervisory Patent Examiner, Art Unit 2656