Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawing submitted on 04/20/2024 has been considered by the examiner.
Examiner Note on Patent Subject Matter Eligibility under 35 U.S.C. 101
Independent claims 1, 8, and 15 recite a process and corresponding functionality for generating text as a proposed message, to be sent to a recipient, from a user input in a chat interface. A large language model (LLM) input prompt is generated by determining an intent or goal from the user input, and the LLM, using the LLM input prompt, generates a proposed message based on the determined user intent or goal. The proposed message is displayed in the chat interface and subsequently sent to a recipient. This process employs natural language processing technology using a large language model, which is widely used in artificial intelligence for processing and filtering high volumes of data from natural language input, and cannot practically be performed in the human mind. Accordingly, the independent claims, and their dependent claims by virtue of their dependency, are found to be directed to patent-eligible subject matter under Step 2A, Prong 1, of the 2019 Patent Subject Matter Eligibility Guidance.
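For reference, the claimed flow summarized above (goal in, prompt built, LLM output displayed and sent) can be sketched in a few lines of Python. All names here (generate_proposed_message, fake_llm) are hypothetical illustrations for clarity only, not drawn from the application or the cited reference:

```python
# Illustrative sketch of the claimed message-generation flow (hypothetical names).

def generate_proposed_message(message_goal: str, llm) -> str:
    """Build an LLM input prompt from the user's message goal and return generated text."""
    # Determine the intent/goal from the user input and form the LLM input prompt.
    prompt = f"Write a message that accomplishes the following goal: {message_goal}"
    # The LLM generates the proposed message from the prompt.
    return llm(prompt)

# A stand-in "LLM" so the sketch is runnable without a real model.
def fake_llm(prompt: str) -> str:
    return f"[generated text for prompt: {prompt}]"

proposed = generate_proposed_message("introduce myself to a recruiter", fake_llm)
print(proposed)  # the proposed message would be displayed in the chat interface, then sent
```

In the claims, the displayed text is the proposed message that the user may then send to the recipient.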
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tsun et al. (US 2024/0296293 A1).
Regarding Claim 1, Tsun teaches: A method comprising: displaying, by a processor, a chat interface within a messaging application (message drafting interface) ([0037] In some embodiments, attribute extraction component 150 extracts attribute data 104 from the online system in response to a user input received by an application software system. For example, an application software system (such as application software system 230 of FIG. 2) receives a user input from user system 110 as described in further detail with reference to FIG. 4 and/or FIG. 10. [0098] FIG. 11 illustrates another example graphical user interface 1000 in accordance with some embodiments of the present disclosure. As shown in FIG. 11, graphical user interface 1000 includes message drafting interface 1105. In some embodiments, message drafting interface 1105 includes message intent options 1110. For example, as shown in FIG. 11, message intent options 1110 can include “Seek work,” “Introduce myself,” and “Chat about: Career.” In response to a user selecting the message intent option 1110 to seek work, content generation system 100 generates suggestions (e.g., suggestion 114 of FIG. 1) for messages for seeking work.); receiving, by the processor, a message goal (message intent or a goal or purpose for that electronic messaging) from a user via the chat interface ([0041] In some embodiments, prompt generation component 160 determines an intent for content generation by profile 102. For example, in response to receiving an input from a user of user system 110 to initiate electronic messaging with a profile, prompt generation component 160 can determine a goal or purpose for that electronic messaging. In some embodiments, prompt generation component 160 determines messaging intent options and presents them to a user of user system 110. 
Prompt generation component 160 determines the messaging intent based on a user selection of one of the intent options.); generating, by the processor, a large language model (LLM) prompt (Prompt) using the message goal ([0039] Prompt generation component 160 generates prompt 106 based on attribute data 104 and the intent option 1110 selected. For example, the user interacting with graphical user interface 1000 to select intent option 1110 to seek work causes instruction generation component 162 to generate instructions for seeking work and input generation component to generate inputs from attribute data 104. Prompt generation component 160 generates message suggestions as described with reference to FIG. 1.); inputting, by the processor, the LLM prompt into an LLM (deep learning model) ([0050] The set of instructions includes data for instructing the deep learning model 108 to perform the appropriate task. For example, the set of instructions can include language telling the deep learning model 108 to generate a profile summary for a user with entry level experience associated with the set of user attributes. As an alternative example, the set of instructions can include an instruction, e.g., a natural language instruction, to the deep learning model 108 for the deep learning model 108 to generate a message for a user seeking a job. In some embodiments, instruction generation component 162 determines the set of instructions using a machine learning model.[0051] For example, example generation component 166 uses a high capacity (e.g., language generation model with many parameters of non-constant values) language generation model to generate a suggestion example. 
[0055] The deep learning model 108 includes a deep learning model that is configured using artificial intelligence-based technologies to machine-generate natural language text.); receiving, by the processor, generated text from the LLM responsive to the LLM prompt; displaying, by the processor, the generated text as a proposed message (suggestion) in the chat interface ([0058] Deep learning model 108 outputs suggestion 114 which is sent to user system 110. In some embodiments, user system 110 receives and displays suggestion 114 on user interface 112. For example, suggestion 114 can include text for a suggested summary for a profile 102 based on attribute data 104 of the profile 102. As another example, suggestion 114 can include text for a suggest headline for a profile 102 based on attribute data 104 of the profile 102.); and sending, by the processor, the proposed message to a recipient ([0108] FIG. 17 illustrates another example graphical user interface 1000 in accordance with some embodiments of the present disclosure. Send interface 1705 is an interface used to send the message suggestion. For example, a user interacting with the send button in send interface 1705 causes application software system 230 to send the message suggestion to the desired recipient.).
Regarding Claim 2, Tsun teaches: The method of claim 1, further comprising revising the proposed message (update) in response to a user input before sending the proposed message (See rejection of claim 1 and [0061] For example, the profile interface displays suggestion 114 and the user interacts with the profile interface to refresh the suggestion. In response to receiving this interaction, user system 110 sends feedback 116 to prompt feedback component 168, indicating that the suggestion should be refreshed. In some embodiments, prompt feedback component 168 generates a performance parameter for suggestion 114 based on feedback 116. For example, feedback such as refreshing, skipping, or changing suggestion 114 is labeled as negative whereas feedback such as accepting suggestion 114 is labeled as positive. [0064] Based on the determination by prompt feedback component 168, input generation component 164 maps an updated set of user attributes of attribute data 104 to the set of prompt inputs. Using the updated set of user attributes, prompt generation component 160 generates an updated prompt. Prompt generation component 160 applies deep learning model 108 to the updated prompt to generate an updated suggestion.).
Regarding Claim 3, Tsun teaches: The method of claim 1, wherein generating the LLM prompt further comprises: parsing (extract) the message goal (attribute data) to identify prompt keywords (intent/goal, associated with work, funding, education, etc.); and augmenting the LLM prompt with the prompt keywords (See rejection of claim 1 and [0034] In some embodiments, although illustrated separately, part or all of attribute extraction component 150, prompt generation component 160, and/or deep learning model 108 are implemented on user system 110. For example, user system 110 can include deep learning model 108 and prompt generation component 160 can send prompt 106 to user system 110 implementing deep learning model 108, causing suggestion 114 to be displayed on a graphical user interface of user system 110. [0037] In some embodiments, attribute extraction component 150 extracts attribute data 104 from the online system in response to a user input received by an application software system. [0039] Prompt generation component 160 receives attribute data 104 and creates prompt 106 using the attribute data 104. [0041] For example, in response to receiving an input from a user of user system 110 to initiate electronic messaging with a profile, prompt generation component 160 can determine a goal or purpose for that electronic messaging. In some embodiments, prompt generation component 160 determines messaging intent options and presents them to a user of user system 110. For example, prompt generation component 160 can use predetermined messaging intent options such as “Seek work” and “Introduce myself” and present these options to a user of user system 110. Prompt generation component 160 determines the messaging intent based on a user selection of one of the intent options. [0043] In such an example, prompt generation component 160 determines that the messaging intent is to seek work. 
In an alternate example, the connection may indicate that the user initiating the electronic messaging is a start-up founder and that the recipient of the electronic messaging is an investor. In such an example, prompt generation component 160 determines that the messaging intent is to seek funding. [0045] In some embodiments, prompt generation component 160 maps a set of user attributes to a set of one or more prompt inputs using the identifier. For example, prompt generation component 160 maps user attributes that are relevant and effective to display for a user with entry level experience (e.g., education) while excluding user attributes that are irrelevant and ineffective to display for a user with entry level experience (e.g., years of experience).[0048] This initial prompt can result in suggestions 114 that read in a narrative format explaining the user's experience and education. Input generation component 164 updates the initial prompt to include additional information from attribute data 104.).
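The parse-and-augment steps of claim 3, as mapped above to Tsun's intent determination, can be sketched in runnable form. The keyword table and both function names below are hypothetical illustrations only, not drawn from the reference:

```python
# Hypothetical sketch of claim 3's steps: parse the message goal to identify
# prompt keywords, then augment the LLM prompt with those keywords.

INTENT_KEYWORDS = {
    "work": "seek work",
    "funding": "seek funding",
    "introduce": "introduce myself",
}

def parse_goal(message_goal: str) -> list[str]:
    # Identify prompt keywords by matching known intent terms in the goal text.
    goal = message_goal.lower()
    return [intent for term, intent in INTENT_KEYWORDS.items() if term in goal]

def augment_prompt(base_prompt: str, keywords: list[str]) -> str:
    # Append the identified keywords to the base LLM prompt.
    return base_prompt + " Intent keywords: " + ", ".join(keywords)

kws = parse_goal("I want to introduce myself and ask about work")
prompt = augment_prompt("Draft a short professional message.", kws)
print(prompt)
```

In Tsun, the analogous mapping is performed by prompt generation component 160, which determines a messaging intent (e.g., seek work, seek funding) and builds prompt 106 from attribute data 104.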
Regarding Claim 4, Tsun teaches: The method of claim 1, wherein the LLM prompt includes data defining an output format for the generated text (See rejection of claim 1 and [0039] For example, the instructions are “Create a profile summary for a [JobTitle1] with [Experience].” In another example, the instructions are “Create a message to [JobPoster] for [JobApplicant] applying to [JobPosition] based on [Experience] and [Education].” In such examples, the bracketed phrases are used as placeholders for user attributes of attribute data 104. [0048] In some embodiments, input generation component 164 creates an initial prompt using a first subset of prompt inputs of the set of prompt inputs mapped to the user attributes and updating the initial prompt to generate prompt 106 which includes a second subset of prompt inputs of the set of prompt inputs. By generating the prompts for these separately, content generation system 100 ensures that the resulting suggestions 114 include both writing styles where necessary. [0050] The set of instructions includes data for instructing the deep learning model 108 to perform the appropriate task. For example, the set of instructions can include language telling the deep learning model 108 to generate a profile summary for a user with entry level experience associated with the set of user attributes. As an alternative example, the set of instructions can include an instruction, e.g., a natural language instruction, to the deep learning model 108 for the deep learning model 108 to generate a message for a user seeking a job.).
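The bracketed-placeholder instruction format quoted from Tsun at [0039] can be illustrated with a minimal sketch. The template echoes the quoted instruction style; the attribute values and the helper name fill_template are hypothetical:

```python
# Minimal sketch of a prompt template with bracketed placeholders for user
# attributes, in the style quoted from Tsun at [0039] (values are hypothetical).

TEMPLATE = "Create a message to [JobPoster] for [JobApplicant] applying to [JobPosition]."

def fill_template(template: str, attributes: dict[str, str]) -> str:
    # Substitute each [Placeholder] with the corresponding attribute value.
    for key, value in attributes.items():
        template = template.replace(f"[{key}]", value)
    return template

prompt = fill_template(TEMPLATE, {
    "JobPoster": "Acme Hiring Manager",
    "JobApplicant": "Jane Doe",
    "JobPosition": "Software Engineer",
})
print(prompt)
```

The filled-in instruction, together with the placeholder structure itself, is the kind of data defining an output format for the generated text that claim 4 recites.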
Regarding Claim 5, Tsun teaches: The method of claim 1, wherein the proposed message comprises one of an email message, a short message service (SMS) message, or a chat message (See rejection of claim 1 and [0098] FIG. 11 illustrates another example graphical user interface 1000 in accordance with some embodiments of the present disclosure. As shown in FIG. 11, graphical user interface 1000 includes message drafting interface 1105. In some embodiments, message drafting interface 1105 includes message intent options 1110. For example, as shown in FIG. 11, message intent options 1110 can include “Seek work,” “Introduce myself,” and “Chat about: Career.” In response to a user selecting the message intent option 1110 to seek work, content generation system 100 generates suggestions (e.g., suggestion 114 of FIG. 1) for messages for seeking work. [0108] FIG. 17 illustrates another example graphical user interface 1000 in accordance with some embodiments of the present disclosure. Send interface 1705 is an interface used to send the message suggestion. For example, a user interacting with the send button in send interface 1705 causes application software system 230 to send the message suggestion to the desired recipient.).
Regarding Claim 6, Tsun teaches: The method of claim 1, wherein the chat interface includes a text input to allow the user to issue subsequent LLM prompts to revise the proposed message (See rejection of claim 1 and [0058] Deep learning model 108 outputs suggestion 114 which is sent to user system 110. [0059] In some embodiments, deep learning model 108 sends suggestion 114 to prompt feedback component 168 of prompt generation component 160. Prompt feedback component 168 is a component that receives suggestion 114 from deep learning model 108 and feedback 116 from user system 110 and uses them to generate future prompts. For example, prompt feedback component 168 generates updated prompts based on suggestions 114 and/or feedback 116. [0061] For example, the profile interface displays suggestion 114 and the user interacts with the profile interface to refresh the suggestion. In response to receiving this interaction, user system 110 sends feedback 116 to prompt feedback component 168, indicating that the suggestion should be refreshed. In some embodiments, prompt feedback component 168 generates a performance parameter for suggestion 114 based on feedback 116. For example, feedback such as refreshing, skipping, or changing suggestion 114 is labeled as negative whereas feedback such as accepting suggestion 114 is labeled as positive. [0064] Based on the determination by prompt feedback component 168, input generation component 164 maps an updated set of user attributes of attribute data 104 to the set of prompt inputs. Using the updated set of user attributes, prompt generation component 160 generates an updated prompt. Prompt generation component 160 applies deep learning model 108 to the updated prompt to generate an updated suggestion. [0101] For example, as shown in FIG. 11, one of message intent options 1110 is “Chat about: Career.” In some embodiments, the user can interact with graphical user interface 1000 to select an option to chat about. 
For example, the user can select “Career” from a menu of options for initiating an electronic messaging. Alternatively, graphical user interface 1000 can include a text box or other interface for a user of graphical user interface 1000 to manually input a topic to chat about.).
Regarding Claim 7, Tsun teaches: The method of claim 1, wherein the chat interface includes a button input to allow the user to issue subsequent LLM prompts to revise the proposed message based on pre-defined goals (See rejection of claim 6 and [0088] In response to receiving a user input of a selection of button 415, graphical user interface 400 updates as shown in FIG. 5. [0090] In some embodiments, profile interface 505 includes update suggestion selection buttons such as start button 515. In some embodiments, profile interface 505 includes a button or other method of selecting a specific update suggestion which causes graphical user interface 400 to update with the appropriate interface for the selected update suggestion. In some embodiments, as shown in FIG. 5, profile interface 505 includes a start button 515 which selects the update suggestions in a predetermined order. For example, selecting start button 515 causes profile interface 505 to update and a headline section 605. [0092] In some embodiments, content generation system 100 updates the prompt to change a tone for the suggestion to be displayed. For example, content generation system 100 updates a tone as explained with reference to FIG. 3 in response to a user interaction with user feedback interface 610. In some embodiments, receiving a user interaction with user feedback interface 610 causes the client device (e.g., user system 110) to send feedback (e.g., feedback 116) to a prompt feedback component (e.g., prompt feedback component 168). As explained with reference to FIG. 3, in response to receiving negatively labeled feedback (e.g., user interaction with the skip button), content generation system 100 generates an updated prompt through extracting updated attribute data, mapping an updated set of user attributes, generating an updated set of instructions, and/or generating an example. 
Additionally, in response to a user interaction with user feedback interface 610, graphical user interface 400 updates to display a suggestion in a summary section 705.).
Regarding Claim 8, Tsun teaches: A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a computer processor, the computer program instructions defining steps of ([0137] For example, a computer system or other data processing system, such as the computing system 100, can carry out the computer-implemented methods 1800 and 1900 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium.): displaying, by a processor, a chat interface within a messaging application; receiving, by the processor, a message goal from a user via the chat interface; generating, by the processor, a large language model (LLM) prompt using the message goal; inputting, by the processor, the LLM prompt into an LLM; receiving, by the processor, generated text from the LLM responsive to the LLM prompt; displaying, by the processor, the generated text as a proposed message in the chat interface; and sending, by the processor, the proposed message to a recipient (See rejection of claim 1).
Regarding Claim 9, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, further comprising revising the proposed message in response to a user input before sending the proposed message (See rejection of claim 2).
Regarding Claim 10, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, wherein generating the LLM prompt further comprises: parsing the message goal to identify prompt keywords; and augmenting the LLM prompt with the prompt keywords (See rejection of claim 3).
Regarding Claim 11, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, wherein the LLM prompt includes data defining an output format for the generated text (See rejection of claim 4).
Regarding Claim 12, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, wherein the proposed message comprises one of an email message, a short message service (SMS) message, or a chat message (See rejection of claim 5).
Regarding Claim 13, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, wherein the chat interface includes a text input to allow the user to issue subsequent LLM prompts to revise the proposed message (See rejection of claim 6).
Regarding Claim 14, Tsun teaches: The non-transitory computer-readable storage medium of claim 8, wherein the chat interface includes a button input to allow the user to issue subsequent LLM prompts to revise the proposed message based on pre-defined goals (See rejection of claim 7).
Regarding Claim 15, Tsun teaches: A device comprising: a processor; a storage medium for tangibly storing thereon program logic for execution by the processor, the program logic comprising steps for ([0131] Computer system 2000 can send messages and receive data, including program code, through the network(s) and network interface device 2008. In the Internet example, a server can transmit a requested code for an application program through the Internet and network interface device 2008. The received code can be executed by processing device 2002 as it is received, and/or stored in data storage system 2040, or other non-volatile storage for later execution.): displaying, by the processor, a chat interface within a messaging application; receiving, by the processor, a message goal from a user via the chat interface; generating, by the processor, a large language model (LLM) prompt using the message goal; inputting, by the processor, the LLM prompt into an LLM; receiving, by the processor, generated text from the LLM responsive to the LLM prompt; displaying, by the processor, the generated text as a proposed message in the chat interface; and sending, by the processor, the proposed message to a recipient (See rejection of claim 1).
Regarding Claim 16, Tsun teaches: The device of claim 15, further comprising revising the proposed message in response to a user input before sending the proposed message (See rejection of claim 2).
Regarding Claim 17, Tsun teaches: The device of claim 15, wherein generating the LLM prompt further comprises: parsing the message goal to identify prompt keywords; and augmenting the LLM prompt with the prompt keywords (See rejection of claim 3).
Regarding Claim 18, Tsun teaches: The device of claim 15, wherein the LLM prompt includes data defining an output format for the generated text (See rejection of claim 4).
Regarding Claim 19, Tsun teaches: The device of claim 15, wherein the chat interface includes a text input to allow the user to issue subsequent LLM prompts to revise the proposed message (See rejection of claim 6).
Regarding Claim 20, Tsun teaches: The device of claim 15, wherein the chat interface includes a button input to allow the user to issue subsequent LLM prompts to revise the proposed message based on pre-defined goals (See rejection of claim 7).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art of record Rodriguez et al. (US 2018/0367484 A1) teach: SUGGESTED ITEMS FOR USE WITH EMBEDDED APPLICATIONS IN CHAT CONVERSATIONS.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM whose telephone number is (571)270-5878. The examiner can normally be reached Monday-Friday, EST (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah, can be reached at 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2653