Detailed Action
This communication is in response to the Amendments and Arguments filed on 7/28/2025.
Claims 1-20 are pending and have been examined.
Claims 1-20 are rejected. Accordingly, this Action has been made FINAL.
Any previous objection/rejection not mentioned in this Office Action has been withdrawn by the Examiner.
Response to Amendment
The Applicant has amended the independent claims to include “the message including one or more previous messages of the conversation thread;” “in response to receiving the selection of the message;” and “extracting primary content from the message, wherein the primary content includes at least a portion of the one or more previous messages.”
Regarding the Claim Rejections under 35 U.S.C. § 103, Applicant notes, with respect to independent claim 1, that the cited references do not disclose or suggest "receiving a selection of a message of a conversation thread, the message including one or more previous messages of the conversation thread" and "extracting primary content from the message, wherein the primary content includes at least a portion of the one or more previous messages," as recited in claim 1.
Applicant notes that, in rejecting claim 1, the Office Action acknowledges that Singh does not disclose "receiving a selection of a message from a conversation thread" and relies on Flöther as disclosing this claim element. (See Office Action, at 20.) Applicant submits that Flöther does not disclose selection of a message that includes previous messages of a conversation thread, nor extracting, from the message, content that includes at least a portion of the previous messages. Flöther describes using a previous conversation history between a sender and a receiver to train a machine learning model to be able to identify a formality level between the sender and the receiver. (See Flöther, [0007], [0065], and FIG. 1.) Flöther further describes selecting text in a draft message (i.e., a message that is being composed) in order to receive, from the trained model, alternative wording suggestions that may be more positively received by the recipient based on the previous formality analysis. (Id., [0069].) Selecting text within a draft message does not correspond to selecting a message from a conversation thread, nor does Flöther describe that a selected message includes a conversation history from which content may be extracted.
Examiner notes that Flöther does disclose this limitation. See Flöther [0065]: “FIG. 1 shows a block diagram of a preferred embodiment of a method 100 for personalizing a message between a sender and a receiver. Method 100 may be viewed as having a first phase (operations 102-112) and a second phase (operations 114-120). In a first phase, method 100 comprises semantically analyzing a back and forth communication history between the sender and the receiver, at operation 102. This may comprise every available electronic message sent between the sender and the receiver including the affiliated metadata. The metadata of the electronic messages may include the time of the messages, time periods between messages, environmental data, sensor data, captured health data, and so on. Based on the semantic analysis of the communications history, method 100 further comprises forming a knowledge graph, at operation 104, between a sender identifier which identifies a sender and a receiver identifier which identifies a receiver.”
See Flöther [0081]: “As data is used throughout the process, communication history data is extracted from the appropriate communication system(s) to train the machine learning model and to help to tailor the message. The message data (Examiner notes: from the communication history) may comprise the text of the message, entities of the text, structuring of the text, length of the text, sentiment analysis data, personal knowledge data, insider data, tone analysis data, emoticons, emoji, additional graphics, links, and other data…”
Applicant notes, with respect to independent claims 12 and 20, that the cited references fail to disclose or suggest "content in the context object ordered based on relevance with the most-relevant content being positioned at an end of the context object and the least-relevant content being positioned at a beginning of the context object," "generating a prompt including the context object," and "providing the prompt to the generative AI model."
Examiner notes that Lagi does teach this limitation. See Lagi [0090]: “The content development platform 100 may include a content cluster data store 132 for storing the content clusters 130. The content cluster data store 132 may comprise a MySQL database or other type of database. The content cluster data store 132 may store mathematical relationships, based on the various models 118, between content objects, such as the primary content object 102 and various other content objects or topics, which, among other things, may be used to determine what pages should be in the same cluster of pages (and accordingly should be linked to each other). In embodiments, clusters are based on matching semantics between phrases, not just matching exact phrases. Thus, new topics can be discovered by observing topics or subtopics within semantically similar content objects in a cluster that are not already covered in a primary content object 102. In embodiments, an auto-discovery engine 170 may process a set of topics in a cluster to automatically discover additional topics that may be of relevance to parties interested in the content of the primary content object 102.”
In the rejection of claim 12, Singh is cited as disclosing providing a prompt to an AI model, and Lagi at FIG. 3 is cited as disclosing ordering content based on relevance. (See Office Action, pp. 35 and 39.) Lagi at FIG. 3 depicts a user interface for presenting suggested topics and related information, not content that would be included in an AI prompt. The information shown in FIG. 3 of Lagi is an output of an analysis, not an input to an AI model. There is no suggestion or motivation to modify an AI prompt to include content that is ordered by relevance within the prompt based on Lagi's disclosure of displaying a user interface; these two concepts are unrelated. The motivation cited in the Office Action ("this allows for sorting by relevancy to which improves the user experience in presentation") is irrelevant to an AI prompt, which is not providing a presentation to a user.
Examiner notes that Lagi does teach this limitation. See Lagi [0107]: “…Thus, also provided herein is the auto-discovery engine 170, including various methods, systems, components, modules, services, processes, applications, interfaces and other elements for automated discovery of topics for interactions with customers of an enterprise, including methods and systems that assist various functions and roles within an enterprise in finding appropriate topics to draw customers into relevant conversations and to extend the conversations in a way that is relevant to the enterprise and to each customer. Automated discovery of relevant content topics may support processes and workflows that require insight into what topics should be written about, such as during conversations with customers. Such processes and workflows may include development of content by human workers, as well as automated generation of content, such as within automated conversational agents, bots, and the like. Automated discovery may include identifying concepts that are related by using a combination of analysis of a relevant item of text (such as core content of a website, or the content of an ongoing conversation) with an analysis of linking (such as linking of related content). In embodiments, this may be performed with awareness at a broad scale of the nature of content on the Internet, such that new, related topics can be automatically discovered that further differentiate an enterprise, while remaining relevant to its primary content. The new topics can be used within a wide range of enterprise functions, such as marketing, sales, services, public relations, investor relations and other functions, including functions that involve the entire lifecycle of the engagement of a customer with an enterprise.”
See Lagi [0101] “In embodiments, the user interface 152 facilitates generation of generated online presence content 160 related to the suggested topic 138. In embodiments, the user interface 152 includes at least one of key words and key phrases that represent the suggested topic 138, which may be used to prompt the user with content for the generation of online presence content.”
The Applicant's Arguments and Amendments do not overcome the Claim Rejections under 35 U.S.C. § 103.
Regarding the Double Patenting rejections, the Applicant's Arguments and Amendments overcome the Double Patenting rejections.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 7/28/2025 and 10/8/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH (U.S. Patent Application Publication US 20230101224 A1) in view of Flöther (U.S. Patent Application Publication US 20220045975 A1).
Regarding Claim 1, SINGH teaches generating a prompt including: a request phrase for a draft reply to the selected message; and the primary content of the message but not secondary content of the message; (see SINGH [0033] “FIG. 2 is a diagram showing an example of the GUI provided by the local assistant program 14 according to various embodiments of the present invention. As shown in FIG. 2, the local assistant program 14 provides a virtual field 50, in this example on the right-side of the email interface, that provides customized responses and information that the user can optionally include, by, in some cases and examples, a single mouse click, an draft response email 52 to be sent in response to an incoming email. The example of FIG. 2 shows on the left side of the GUI, an email 54 received from an original sender, in this example Mary Smith, and, above that still on the left side, the draft response email 50 from the user in response to original sender's email. The callouts on the left side of FIG. 2 generally show the types of information from Ms. Smith's email that are extracted by the intelligent email server 24 in order to generate relevant response content. The callouts on the right side of FIG. 2 generally show the information sent back from the intelligent email server 24 to the local assistant program 14 for inclusion in the GUI. 
This information includes the relevant response content determined by the intelligent email server 24.”) providing the prompt as input to the generative AI model; (see SINGH [0037] “The NLP component 30 can also identify the substance, or body, or content of the incoming email so that the other components (e.g., the intent extraction component 32) of the intelligent email server 24 can extract and decipher the intent of any questions or requested actions in the body of the email.”) (see SINGH [0048] “Then, at block 109 the NLP component 30 of the back-end intelligent computer system 24 can generate a response to the inquiry in the incoming email based on the relevant documents found in the research library 27. Thus, the response is not a “canned,” pre-prepared response, but instead is generated specifically in response to the incoming. The generated response could be the same as (or similar to) a response to another inquiry in another (second) incoming email, where the inquiry in the second email is on the same topic and close in time (so that the same documents in the research library 27 are the same from a timing perspective). At step 118 the prepared response can then be transmitted by the back-end intelligent computer system 24 to the local assistant 14 on the user's device 10 for, at step 120, display on the GUI provided by the local assistant 14, as shown by the example query response block in FIG. 2.”) receiving, in response to the prompt, an output from the generative AI model including a suggested draft reply; (see SINGH [0039] “Once the content of the incoming email is identified, the intent extraction module 32 can, through its machine learning training, identify an intent, theme and/or sentiment of the email. Is it asking about stocks with upside or downside? In the near-term or long-term? Or stocks that will be impacted (or not impacted) by an upcoming or recent event? In terms of investments, is the sentiment bullish, bearish, or neutral?
Again, once the intent, theme and/or sentiment are identified, they can be used to search for responsive information in the documents in the research library 27. They can also be stored in the contact interactions data store 23 as metadata about the ender's email (e.g., the sender's email was neutral).”) and causing a display of the suggested draft reply. (see SINGH [0048] “Then, at block 109 the NLP component 30 of the back-end intelligent computer system 24 can generate a response to the inquiry in the incoming email based on the relevant documents found in the research library 27. Thus, the response is not a “canned,” pre-prepared response, but instead is generated specifically in response to the incoming. The generated response could be the same as (or similar to) a response to another inquiry in another (second) incoming email, where the inquiry in the second email is on the same topic and close in time (so that the same documents in the research library 27 are the same from a timing perspective). At step 118 the prepared response can then be transmitted by the back-end intelligent computer system 24 to the local assistant 14 on the user's device 10 for, at step 120, display on the GUI provided by the local assistant 14, as shown by the example query response block in FIG. 2.”)
SINGH does not specifically teach 1. A system for generating a suggested reply message using a generative artificial intelligence (AI) model, the system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations comprising: However Flöther does teach this limitation (see Flöther [0141] “Computer system/server 700 is shown in the form of a general-purpose computing device. The components of computer system 700 may include, but are not limited to, one or more processors or processing units 702, a system memory 704, and a bus 706 that couple various system components to a processor 702. Bus 706 represents one or more of any of several types of bus structures including a memory bus, a memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures.”) receiving a selection of a message of a conversation thread, the message including one or more previous messages of the conversation thread; (see Flöther [0007] “According to one aspect of the present disclosure, a method for personalizing a message between a sender and a receiver may be provided. The method may comprise semantically analyzing a communication history between the sender and the receiver and forming a knowledge graph between a sender identifier identifying the sender and a receiver identifier identifying the receiver. Furthermore, the method may comprise deriving from the knowledge graph formality level values between the sender and the receiver, using a first trained machine-learning model, analyzing parameter values of replies in the communication history to determine receiver impact score values and training a second machine-learning system to generate a model to predict the receiver impact score value based on the knowledge graph and the formality level.”) (See Flöther [0065]: “FIG.
1 shows a block diagram of a preferred embodiment of a method 100 for personalizing a message between a sender and a receiver. Method 100 may be viewed as having a first phase (operations 102-112) and a second phase (operations 114-120). In a first phase, method 100 comprises semantically analyzing a back and forth communication history between the sender and the receiver, at operation 102. This may comprise every available electronic message sent between the sender and the receiver including the affiliated metadata. The metadata of the electronic messages may include the time of the messages, time periods between messages, environmental data, sensor data, captured health data, and so on. Based on the semantic analysis of the communications history, method 100 further comprises forming a knowledge graph, at operation 104, between a sender identifier which identifies a sender and a receiver identifier which identifies a receiver.”) (See Flöther [0081] “As data is used throughout the process, communication history data is extracted from the appropriate communication system(s) to train the machine learning model and to help to tailor the message.
The message data (examiner notes from the communication history) may comprise the text of the message, entities of the text, structuring of the text, length of the text, sentiment analysis data, personal knowledge data, insider data, tone analysis data, emoticons, emoji, additional graphics, links, and other data…”) extracting the primary content from the conversation thread; (see Flöther [0011] “Moreover, the message personalizing system may comprise selection means adapted for selecting a linguistic expression in a message being drafted, determination means adapted for determining an expression intent of the selected linguistic expression, modification means adapted for modifying the linguistic expression based on the formality level and the expression intent, thereby generating a modified linguistic expression and test means adapted for testing if the modified linguistic expression has an increased likelihood to lead to a higher receiver impact score value using a third trained machine-learning model.”)
SINGH in view of Flöther are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the generating a prompt including: a request phrase for a draft reply to the selected message; and the primary content of the message but not secondary content of the message; providing the prompt as input to the generative AI model; receiving, in response to the prompt, an output from the generative AI model including a suggested draft reply; and causing a display of the suggested draft reply of SINGH to incorporate A system for generating a suggested reply message using a generative artificial intelligence (AI) model, the system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations comprising: receiving a selection of a message from a conversation thread, the message including one or more previous messages of the conversation thread; extracting the primary content from the conversation thread of Flöther. This allows for faster and more efficient communication as recognized by Flöther [0014].
As to Claim 2, SINGH in view of Flöther teaches The system of claim 1,
Furthermore SINGH teaches wherein the secondary content includes a signature included in the selected message. (see SINGH [0024] “The local assistant program 14 interfaces with the email server 22 and the back-end intelligent computer system 24 to provide several enhanced email functionalities and customizations for the user of the user computer device 10. For the example, the back-end intelligent computer system 24, through machine learning, can read and interpret email sent to (and received by) the user computer device 10. For example, the back-end intelligent computer system 24 can read and interpret the sender, subject (from the subject line), the body, the signature, and any attachments to incoming emails. In addition, through machine learning, the back-end intelligent computer system 24 is trained to understand the intent of the incoming emails,”)
As to Claim 7, SINGH in view of Flöther teaches The system of claim 1,
Furthermore SINGH teaches wherein the primary content includes a body of the message. (see SINGH [0024] “The local assistant program 14 interfaces with the email server 22 and the back-end intelligent computer system 24 to provide several enhanced email functionalities and customizations for the user of the user computer device 10. For the example, the back-end intelligent computer system 24, through machine learning, can read and interpret email sent to (and received by) the user computer device 10. For example, the back-end intelligent computer system 24 can read and interpret the sender, subject (from the subject line), the body, the signature, and any attachments to incoming emails.”)
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over SINGH (U.S. Patent Application Publication US 20230101224 A1), in view of Flöther (U.S. Patent Application Publication US 20220045975 A1), and further in view of Brown (U.S. Patent Application Publication US 20120297460 A1).
As to Claim 4, SINGH in view of Flöther teaches The system of claim 1,
Furthermore SINGH teaches wherein the operations further comprise: identifying a header in at least one of the one or more previous messages (see SINGH [0024] “The local assistant program 14 interfaces with the email server 22 and the back-end intelligent computer system 24 to provide several enhanced email functionalities and customizations for the user of the user computer device 10. For the example, the back-end intelligent computer system 24, through machine learning, can read and interpret email sent to (and received by) the user computer device 10. For example, the back-end intelligent computer system 24 can read and interpret the sender, subject (from the subject line), the body, the signature, and any attachments to incoming emails. In addition, through machine learning, the back-end intelligent computer system 24 is trained to understand the intent of the incoming emails,”)
SINGH in view of Flöther do not specifically teach and excluding the header from the primary content. However Brown does teach this limitation (see Brown [251] “…receiving a message from the service intended for the client; examining a header of the message to determine whether the header represents a potential security violation; stripping the header from the message responsive to a determination that the header represents a potential security violation;”)
SINGH in view of Flöther and Brown are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of SINGH in view of Flöther, in which the operations further comprise identifying a header of one or more previous messages, to incorporate excluding the header as taught by Brown. This allows for increased security, as recognized by Brown [251].
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH (U.S. Patent Application Publication US 20230101224 A1), in view of Flöther (U.S. Patent Application Publication US 20220045975 A1), and further in view of Bivans (U.S. Patent Application Publication US 20230188557 A1).
As to Claim 5, SINGH in view of Flöther teaches The system of claim 1,
Furthermore SINGH teaches wherein extracting the primary content from the message comprises: determining a recency of each of the previous messages in the conversation thread; (see SINGH [0028] “The interactions data store 23 stores data about interactions that the contact had with the research teams of the sell-side firm. The interactions between the contacts and the sell-side firm can include interactions such as emails, phone calls, and meetings involving the various contacts and members of the sell-side firm. This interaction-type data may include the date, time, duration, participants and/or topic(s) of the interaction.”) and extracting the set of previous messages as the primary content. (see SINGH [0035] The entity extraction component can also be trained, through machine learning, to identify the entity associated with the sender, in this example “Critical Capital.” The entity extraction component can identify the entity using, for example, the sender's email address and/or the sender's the signature block. Information about the sender and the sender's organization is stored in the CRM and contact interactions data stores 21, 23. For instance, in an example where Critical Capital is a buy-side firm and the recipient organization is a sell-side firm, the CRM data store 21 may store up-to-data about funds administered by the buy-side firm, its holdings, and recent trades. The contact interactions data store 23 can store data about the sender's interactions with the sell-side firm, including to whom and when the sender (Ms. Smith in this example) sent prior emails and made calls to individuals with the sell-side firm, as well as the content or subject of those emails and calls. It can also store data about in-person or virtual meetings (e.g., video, teleconferencing, etc.) that the contact had with the sell-side firm. The intelligent email server 24 can use this information to generate customized response material for the user. 
Indeed, the contact interactions data store 23 can store such data on all of the sell-side firm's contacts at the buy-side firm and use this aggregate contacts information for the buy-side firm to generate the customized response content for the user.”)
SINGH in view of Flöther do not teach determining a set of previous messages, where the set includes a preset number of most recent messages of the previous messages in the conversation thread. However, Bivans does teach this limitation (see Bivans [0176] “… the receiver module can: store sets of metrics (e.g., first set of metrics, second set of metrics) for each received message; and calculate a trust score based on an average of these metrics for a set of messages within a window of a most recent time period (e.g., 5 minutes) or within a window of a predefined number of most recent messages (e.g., 10 messages).”)
SINGH in view of Flöther and Bivans are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of SINGH in view of Flöther to incorporate determining a set of previous messages, where the set includes a preset number of most recent messages, of Bivans. This allows the system to recognize which messages are relevant, as recognized by Bivans [0175-0177].
As to Claim 6, SINGH in view of Flöther and further in view of Bivans teaches The system of claim 5,
Furthermore SINGH teaches wherein the operations further comprise: summarizing content of previous messages in the conversation thread that are not included in the set; and including the summarized content in the prompt. (see SINGH Figure 1 element 50 for summarized content included in the prompt. See SINGH [0039] “Once the content of the incoming email is identified, the intent extraction module 32 can, through its machine learning training, identify an intent, theme and/or sentiment of the email. Is it asking about stocks with upside or downside? In the near-term or long-term? Or stocks that will be impacted (or not impacted) by an upcoming or recent event? In terms of investments, is the sentiment bullish, bearish, or neutral? Again, once the intent, theme and/or sentiment are identified, they can be used to search for responsive information in the documents in the research library 27. They can also be stored in the contact interactions data store 23 as metadata about the ender's email (e.g., the sender's email was neutral).”)
Claims 8, 9, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH (U.S. Patent Application Publication US 20230101224 A1), in view of Flöther (U.S. Patent Application Publication US 20220045975 A1), and further in view of Luzhnica (U.S. Patent No. US 11516158 B1).
As to Claim 8, SINGH in view of Flöther teaches The system of claim 1,
SINGH in view of Flöther do not teach wherein the operations further comprise: identifying machine-readable format content; and formatting the machine-readable format content into a human-readable format for inclusion in the prompt. However Luzhnica does teach this limitation (see Luzhnica (73:4-27) “(368) Records relating to natural language messages can, at various stages of storage, use, etc., be put into (translated into) different form not corresponding to natural language and analyzed, stored, relayed, etc., in such modified forms (e.g., in a machine-readable code, programming language, vector, or other encoding) or can be associated with additional data elements in records (e.g., metadata tags, additional context, and the like). E.g., in aspects, natural language message components of a method, such as prompts, training set data, or both, may subjected to embedding, tokenization, compression, encryption, vectorization, etc. In aspects, inputs are tokenized, data records are presented as tokens, or both. In aspects, tokens represent N-grams (either delimiter-separated n-grams, semantic n-grams, or fragment n-grams composed of a collection of characters from some other semantic element). In other aspects, systems can, at least in part, analyze message on a character-by-character basis. In general, any disclosure/aspect herein relating to semantic elements, words, and the like, provides implicit support for a step/function, etc., wherein token(s) or other elements described herein are used in place of such semantic elements in respect of any step(s) of described methods, components of systems, or functions carried out by system components.”)
SINGH in view of Flöther and Luzhnica are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of SINGH in view of Flöther to incorporate identifying machine-readable format content, and formatting the machine-readable format content into a human-readable format for inclusion in the prompt, of Luzhnica. This allows for information to be stored with the message information, as recognized by Luzhnica (73:4-27).
As to Claim 9, SINGH in view of Flöther and further in view of Luzhnica teaches The system of claim 8,
Furthermore Luzhnica teaches wherein the machine-readable format content includes a date. (see Luzhnica (73:4-27) “(368) Records relating to natural language messages can, at various stages of storage, use, etc., be put into (translated into) different form not corresponding to natural language and analyzed, stored, relayed, etc., in such modified forms (e.g., in a machine-readable code, programming language, vector, or other encoding) or can be associated with additional data elements in records (e.g., metadata tags, additional context, and the like). E.g., in aspects, natural language message components of a method, such as prompts, training set data, or both, may subjected to embedding, tokenization, compression, encryption, vectorization, etc. In aspects, inputs are tokenized, data records are presented as tokens, or both. In aspects, tokens represent N-grams (either delimiter-separated n-grams, semantic n-grams, or fragment n-grams composed of a collection of characters from some other semantic element). In other aspects, systems can, at least in part, analyze message on a character-by-character basis. In general, any disclosure/aspect herein relating to semantic elements, words, and the like, provides implicit support for a step/function, etc., wherein token(s) or other elements described herein are used in place of such semantic elements in respect of any step(s) of described methods, components of systems, or functions carried out by system components.”) (see Luzhnica (9:10-15) “a message is generated using a prior message apparently as a prompt along with a few additional short segment inputs (yes and a date/time indicator), thereby generating a response,”)
SINGH in view of Flöther and Luzhnica are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of SINGH in view of Flöther to incorporate Luzhnica's teaching that the machine-readable format content includes a date. This allows for information to be stored with the message information so that a computer can read it more efficiently, as recognized by Luzhnica (73:4-27).
As to Claim 10, SINGH in view of Flöther teaches The system of claim 1,
SINGH in view of Flöther does not teach wherein the primary content includes content ordered by a determined relevance, where more relevant content is included later in the prompt and less relevant content is included earlier in the prompt. However, Luzhnica does teach this limitation (see Luzhnica, Figure 3, which shows the highest relevance (90%) at the end of the prompt.)
SINGH in view of Flöther and Luzhnica are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of SINGH in view of Flöther to incorporate Luzhnica's teaching that the primary content includes content ordered by a determined relevance, where more relevant content is included later in the prompt and less relevant content is included earlier in the prompt. This allows for information to be stored with the message information so that a computer can read it more efficiently, as recognized by Luzhnica (73:4-27).
As to Claim 11, SINGH in view of Flöther teaches The system of claim 10,
SINGH in view of Flöther does not teach wherein the determined relevance is based on recency of the content. However, Luzhnica does teach this limitation (see Luzhnica (120:65-121:54) (553) “…Thus, alternatively, if a freeform situational prompt input is provided, such a message (“Aiden loves drums”) could appear in a message that touch on concepts that NN(s) determine are sufficiently related to drums, such as “staying on beat,” referencing a great drummer in the user's location at a relevant time, referencing characteristics of a famous drummer in a message about a topic, such as perseverance or creativity, and the like….Selectable module #556 can provide the user with the ability to select additional prompt data concerning a recipient-associated organization. Such selectable prompt content can add additional instructional prompt or situational prompt content for the system to use in generating draft messages. Notably, the content and layout of various selectable modules can be different within, e.g., the module section of the interface, #506, as is clear by the interface example(s) shown in, e.g., FIGS. 13, 14A, and 15.”)
SINGH in view of Flöther and Luzhnica are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of SINGH in view of Flöther to incorporate Luzhnica's teaching that the determined relevance is based on recency of the content. This allows for information to be stored with the message information so that a computer can read it more efficiently, as recognized by Luzhnica (73:4-27).
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over SINGH (U.S. Patent Application Publication No. US 20230101224 A1), in view of Flöther (U.S. Patent Application Publication No. US 20220045975 A1), and further in view of Lagi (U.S. Patent Application Publication No. US 20200065857 A1).
As to Claim 3, SINGH in view of Flöther teaches The system of claim 1,
SINGH in view of Flöther does not teach further comprising translation of at least a portion of the primary content from a first language to a second language. However, Lagi does teach this limitation (see Lagi [0036] “According to some embodiments, the machine learning system iteratively tests variations of at least one of the timing, the language, and the topic of personalized message. In some embodiments, the machine learning system uses outcomes of the iteratively personalized messages to improve generation of new targeted messages. In some embodiments, the outcomes include whether a transaction was completed with a recipient. In some embodiments, outcome data about completion of a transaction is automatically extracted from a customer-relationship management database of the user.”) (see Lagi [0082] “Note that the present concepts can be carried across languages insofar as an aspect hereof provides for manual or automated translation from a first language to a second language, and that inputs, results and outputs of the system can be processed in one or another language, or in a plurality of languages as desired.”)
SINGH in view of Flöther and Lagi are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of SINGH in view of Flöther to incorporate Lagi's teaching of translation of at least a portion of the primary content from a first language to a second language. This improves the user experience in presentation and allows users to understand the message, as recognized by Lagi [0082].
Regarding independent Claim 12, SINGH teaches generating a prompt including the context object and a request phrase for a draft reply to the selected message; (see SINGH [0033] “FIG. 2 is a diagram showing an example of the GUI provided by the local assistant program 14 according to various embodiments of the present invention. As shown in FIG. 2, the local assistant program 14 provides a virtual field 50, in this example on the right-side of the email interface, that provides customized responses and information that the user can optionally include, by, in some cases and examples, a single mouse click, an draft response email 52 to be sent in response to an incoming email. The example of FIG. 2 shows on the left side of the GUI, an email 54 received from an original sender, in this example Mary Smith, and, above that still on the left side, the draft response email 50 from the user in response to original sender's email. The callouts on the left side of FIG. 2 generally show the types of information from Ms. Smith's email that are extracted by the intelligent email server 24 in order to generate relevant response content. The callouts on the right side of FIG. 2 generally show the information sent back from the intelligent email server 24 to the local assistant program 14 for inclusion in the GUI. 
This information includes the relevant response content determined by the intelligent email server 24.”) providing the prompt to the generative AI model; (see SINGH [0037] “The NLP component 30 can also identify the substance, or body, or content of the incoming email so that the other components (e.g., the intent extraction component 32) of the intelligent email server 24 can extract and decipher the intent of any questions or requested actions in the body of the email.”) (see SINGH [0048] “Then, at block 109 the NLP component 30 of the back-end intelligent computer system 24 can generate a response to the inquiry in the incoming email based on the relevant documents found in the research library 27. Thus, the response is not a “canned,” pre-prepared response, but instead is generated specifically in response to the incoming. The generated response could be the same as (or similar to) a response to another inquiry in another (second) incoming email, where the inquiry in the second email is on the same topic and close in time (so that the same documents in the research library 27 are the same from a timing perspective). At step 118 the prepared response can then be transmitted by the back-end intelligent computer system 24 to the local assistant 14 on the user's device 10 for, at step 120, display on the GUI provided by the local assistant 14, as shown by the example query response block in FIG. 2.”) receiving, in response to the prompt, an output from the generative AI model including a suggested draft reply; (see SINGH [0039] “Once the content of the incoming email is identified, the intent extraction module 32 can, through its machine learning training, identify an intent, theme and/or sentiment of the email. Is it asking about stocks with upside or downside? In the near-term or long-term? Or stocks that will be impacted (or not impacted) by an upcoming or recent event? In terms of investments, is the sentiment bullish, bearish, or neutral?
Again, once the intent, theme and/or sentiment are identified, they can be used to search for responsive information in the documents in the research library 27. They can also be stored in the contact interactions data store 23 as metadata about the sender's email (e.g., the sender's email was neutral).”) and causing a display of the suggested draft reply. (see SINGH [0048], reproduced above.)
SINGH does not specifically teach A computer-implemented method for generating a suggested reply message using a generative artificial intelligence (AI) model, the method comprising: However, Flöther does teach this limitation (see Flöther [0141] “Computer system/server 700 is shown in the form of a general-purpose computing device. The components of computer system 700 may include, but are not limited to, one or more processors or processing units 702, a system memory 704, and a bus 706 that couple various system components to a processor 702. Bus 706 represents one or more of any of several types of bus structures including a memory bus, a memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures.”) receiving a selection of a message, the message comprising a header and a body; extracting at least a portion of the body of the message, where the body includes one or more previous messages in a communication thread; (see Flöther [0007] “According to one aspect of the present disclosure, a method for personalizing a message between a sender and a receiver may be provided. The method may comprise semantically analyzing a communication history between the sender and the receiver and forming a knowledge graph between a sender identifier identifying the sender and a receiver identifier identifying the receiver.
Furthermore, the method may comprise deriving from the knowledge graph formality level values between the sender and the receiver, using a first trained machine-learning model, analyzing parameter values of replies in the communication history to determine receiver impact score values and training a second machine-learning system to generate a model to predict the receiver impact score value based on the knowledge graph and the formality level.”) (see Flöther [0081] “As data is used throughout the process, communication history data is extracted from the appropriate communication system(s) to train the machine learning model and to help to tailor the message. The message data may comprise the text of the message, entities of the text, structuring of the text, length of the text, sentiment analysis data, personal knowledge data, insider data, tone analysis data, emoticons, emoji, additional graphics, links, and other data. The structuring of the text may include greetings or openings, main part or text body, et cetera. 
The length of the text may include, for example, the measurement of the text bloc, the number of characters in words or in the message overall, the word count, or the number of paragraphs in the communication.”) selecting content from the extracted portion to include in a context object, (see Flöther [0011] “Moreover, the message personalizing system may comprise selection means adapted for selecting a linguistic expression in a message being drafted, determination means adapted for determining an expression intent of the selected linguistic expression, modification means adapted for modifying the linguistic expression based on the formality level and the expression intent, thereby generating a modified linguistic expression and test means adapted for testing if the modified linguistic expression has an increased likelihood to lead to a higher receiver impact score value using a third trained machine-learning model.”) (see Flöther [0081], reproduced above.)
SINGH and Flöther are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SINGH's generating a prompt including a request phrase for a draft reply to the selected message and the primary content of the conversation thread but not the secondary content of the message; providing the prompt as input to the generative AI model; receiving, in response to the prompt, an output from the generative AI model including a suggested draft reply; and causing a display of the suggested draft reply, to incorporate Flöther's computer-implemented method for generating a suggested reply message using a generative artificial intelligence (AI) model, comprising: receiving a selection of a message, the message comprising a header and a body; extracting at least a portion of the body of the message, where the body includes one or more previous messages in a communication thread; and selecting content from the extracted portion to include in a context object. This allows for faster and more efficient communication, as recognized by Flöther [0014].
SINGH in view of Flöther does not specifically teach wherein content in the context object is ordered based on relevance with the most relevant content being positioned at an end of the context object and the least relevant content being positioned at a beginning of the context object. However, Lagi does teach this limitation (see Lagi, Figure 3, which shows the highest relevance (90%) at the end of the prompt.) (see Lagi [0090] “The content development platform 100 may include a content cluster data store 132 for storing the content clusters 130. The content cluster data store 132 may comprise a MySQL database or other type of database. The content cluster data store 132 may store mathematical relationships, based on the various models 118, between content objects, such as the primary content object 102 and various other content objects or topics, which, among other things, may b