DETAILED ACTION
This action is responsive to the application filed on 05/09/2024. Claims 1-31 are pending in the case. Claims 1, 13, 24, 28 and 31 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. As to claims 1, 24 and 31, the claims recite a method for receiving a selection of content, generating and displaying a prompt based on the selection and context, and receiving selection of the prompt to generate and display output from a language model.
The limitation of selecting content, generating a prompt based on the selection and some context, and using the prompt to generate an output, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a manual selection of content, determining a question / prompt based on the selection and the content’s context, and writing an answer / output to the question / prompt, but for the recitation of generic computer components. That is, other than reciting “a processor” and “a memory” (as recited in system claim 24 and device claim 31), nothing in the claim elements precludes the steps from practically being performed by a user manually selecting and analyzing a subset of content and context to determine a prompt / question and then determining and writing answers / output to the prompt / question. For example, but for the “processor” and “memory” language, the selecting, providing and displaying in the context of the claims encompass the user manually selecting a subset of content and providing and writing down a prompt / question based on the subset of content and some context. Similarly, the step of selecting and generating/displaying output is a process that, under its broadest reasonable interpretation, covers performance of the user manually answering the question or instruction that the prompt posits and writing the output down. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims only recite two additional elements – using a processor and memory to perform the receiving, providing, displaying, generating and displaying steps. The processors and memories in these steps are recited at a high level of generality (i.e., as generic processors and memories performing the generic computer function of selecting content, providing and displaying a prompt and selecting a prompt to generate and display output) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using processors and memories to perform the steps of receiving, providing, displaying, generating and displaying amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Claims 2-4 and 7 depend from claim 1, and thus recite a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The limitations of the context “includ[ing] data available to an operating system executing on a user device”, “the selection is related to a first application and the context relating to the selection relates to a second application”, “the selection is related to a first application and… using content relating to the second application” and “the context relates to an application associated with the selection” do not integrate the judicial exception into a practical application. These limitations merely represent instructions to apply the judicial exception on a computer with an operating system, considering content and context data available on the computer in a variety of ways. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claims 2-4 and 7 are directed to an abstract idea.
Claim 5 depends from claim 1, and thus recites a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The steps of editing the selection and replacing the selection with output do not integrate the judicial exception into a practical application. These limitations merely represent instructions to apply the judicial exception on a computer, with the user manually erasing a subset of content and replacing it with written-down replacement content. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claim 5 is directed to an abstract idea.
Claim 6 depends from claim 1, and thus recites a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The step of generating a prompt based on “popularity” of the prompt given a context does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually considering the suitability of the prompt given a particular context. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 6 is directed to an abstract idea.
Claim 8 depends from claim 1, and thus recites a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The step of revising the selected content with output does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually editing the content selection with the output. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 8 is directed to an abstract idea.
Claim 9 depends from claim 1, and thus recites a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The steps of displaying more than one output and selecting a particular output do not integrate the judicial exception into a practical application. These limitations merely represent instructions to apply the judicial exception on a computer, with the user manually writing down two or more answers and then selecting the answer that is most appropriate. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claim 9 is directed to an abstract idea.
Claims 10 and 11 depend from claim 1, and thus recite a similar limitation of receiving a selection of content, generating and displaying a prompt based on the selection and context and receiving selection of the prompt to generate and display output from a language model. For the reasons discussed for claim 1, this limitation recites an abstract idea. The steps of selecting an output, generating an additional prompt and then generating output using the additional prompt do not integrate the judicial exception into a practical application. These limitations merely represent instructions to apply the judicial exception on a computer, with the user manually revising or generating a new prompt based on considering the output of the old prompt and generating new outputs / answers. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claims 10 and 11 are directed to an abstract idea.
As to claims 13 and 28, the claims recite a method for receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. The limitation of selecting content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a manual selection of content, the user deciding to explain the content, determining an explanation based on the selection and writing down the explanation, but for the recitation of generic computer components. That is, other than reciting “a processor” and “a memory” (as recited in system claim 28), nothing in the claim elements precludes the steps from practically being performed by a user manually selecting a subset of content, deciding to explain the subset and generating and writing an explanation for the subset. For example, but for the “processor” and “memory” language, the selecting, displaying and generating in the context of the claims encompass the user manually selecting a subset of content, deciding to explain the content and providing and writing down an explanation based on the subset of content. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims only recite two additional elements – using a processor and memory to perform the receiving, displaying, generating and displaying steps. The processors and memories in these steps are recited at a high level of generality (i.e., as generic processors and memories performing the generic computer function of selecting content, deciding to explain the content and providing and displaying an explanation) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using processors and memories to perform the steps of receiving, displaying, generating and displaying amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Claim 14 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of generating an explanation using context does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually using contextual information to generate an explanation. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 14 is directed to an abstract idea.
Claim 15 depends from claim 14, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The limitation of the context data including data available to an operating system does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer with an operating system, considering content and context data available on the computer in a variety of ways. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 15 is directed to an abstract idea.
Claim 16 depends from claim 14, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The limitation of the context data including content displayed in an application does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, considering content and context data available on the computer in a variety of ways. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 16 is directed to an abstract idea.
Claims 17 and 18 depend from claim 14, and thus recite a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of providing a definition or meaning of a selection where there can be multiple meanings or definitions does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually determining the meaning or definition of a selection of content where the content can have many meanings or definitions. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claims 17 and 18 are directed to an abstract idea.
Claim 19 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of the explanation describing a relationship between phrases in a selection of content does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually determining the relationship between data in a subset of the content. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 19 is directed to an abstract idea.
Claim 20 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of “the selection of content is read-only content” links the judicial exception to a technical field and also adds a meaningful limitation in that it employs the information provided by the judicial exception (a “read-only content”) to display information in an unchangeable fashion. Claim 20 is eligible because it is not directed to an abstract idea or any other judicial exception.
Claim 21 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The steps of determining multiple questions / prompts for a subset, selecting a question / prompt and generating an explanation that answers the question / prompt do not integrate the judicial exception into a practical application. These limitations merely represent instructions to apply the judicial exception on a computer, with the user manually determining a variety of questions / prompts for a subset of content and choosing one in particular to answer. Thus, the additional elements do not integrate the recited judicial exception into a practical application and claim 21 is directed to an abstract idea.
Claim 22 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of basing an explanation on a subset of content, a prompt / question and contextual data does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user considering the subset of content, a prompt / question and contextual data when generating and writing down an explanation. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 22 is directed to an abstract idea.
Claim 23 depends from claim 13, and thus recites a similar limitation of receiving a selection of content, receiving an option to generate an explanation and generating and displaying the explanation based on the selection. For the reasons discussed for claim 13, this limitation recites an abstract idea. The step of providing a user interface element for receiving a text prompt does not integrate the judicial exception into a practical application. This limitation merely represents instructions to apply the judicial exception on a computer, with the user manually writing down a question / prompt for the content. Thus, the additional element does not integrate the recited judicial exception into a practical application and claim 23 is directed to an abstract idea.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6-19, 21-26 and 28-31 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fabian et al. (US 20240303440 A1, hereinafter Fabian).
As to claim 1, Fabian discloses a method comprising:
receiving a selection of content ("The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099);
in response to receiving the selection, providing a prompt based on the selection and context relating to the selection ("The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053);
displaying the prompt ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "In task pane 144 of user experience 143, application service 110 displays three cards, each containing, in a natural language format, one of three suggestions provided by LLM service 120. Application service 110 may display the suggestions according to how LLM service 120 self-evaluated the suggestions, e.g., according to relevance to the input or correctness," Fabian paragraph 0048);
in response to receiving a selection of the prompt, generating output by providing the content and the prompt as input to a language model ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053); and
displaying the output ("Upon receiving the user's selection of the first suggestion in task pane 144, application service 110 implements the suggestion by adding a column (not shown) to the spreadsheet data. In task pane 146 of user experience 145, application service 110 configures and displays suggested actions in natural language based on the chat history and spreadsheet contextual information," Fabian paragraph 0049).
As to claim 2, Fabian further discloses the method of claim 1, wherein the context relating to the content includes data available to an operating system executing on a user device (“Software 1305 (including application service process 1306) may be implemented in program instructions and among other functions may, when executed by processing system 1302, direct processing system 1302 to operate as described with respect to the various operational scenarios, sequences, and processes illustrated herein. For example, software 1305 may include program instructions for implementing an application service process as described herein.” Fabian paragraph 0128; “Software 1305 may include additional processes, programs, or components, such as operating system software, virtualization software, or other application software. Software 1305 may also comprise firmware or some other form of machine-readable processing instructions executable by processing system 1302,” Fabian paragraph 0129).
As to claim 3, Fabian further discloses the method of claim 1, wherein the selection is related to a first application ("The spreadsheet environment of application service 110 may be implemented a natively installed and executed application, a browser-based application, or a mobile application, and may execute in a stand-alone manner, within the context of another application such as a presentation application or word processing application," Fabian paragraph 0043; "In an implementation, the user enters a query into a chat interface displayed in the user experience of a spreadsheet environment of the productivity application, such as a spreadsheet application displaying a workbook or a word processing document including a spreadsheet data table. The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet. The intent of the user's request may be for a description or explanation of the dataset or to improve the dataset by requesting suggestions for modifying the dataset, e.g., adding a calculated column. The natural language input includes a text-based input keyed into the chat interface by the user or spoken by the user and translated by a speech-to-text module," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. 
An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099) and the context relating to the selection relates to a second application ("The spreadsheet environment of application service 110 may be implemented a natively installed and executed application, a browser-based application, or a mobile application, and may execute in a stand-alone manner, within the context of another application such as a presentation application or word processing application," Fabian paragraph 0043; "In an implementation, the user enters a query into a chat interface displayed in the user experience of a spreadsheet environment of the productivity application, such as a spreadsheet application displaying a workbook or a word processing document including a spreadsheet data table. The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet. The intent of the user's request may be for a description or explanation of the dataset or to improve the dataset by requesting suggestions for modifying the dataset, e.g., adding a calculated column. The natural language input includes a text-based input keyed into the chat interface by the user or spoken by the user and translated by a speech-to-text module," Fabian paragraph 0092; "The inputs may relate to the suggestion that was implemented, to another suggestion, to an error generated in relation to the implemented suggestion, or to another aspect of workbook data 320. The inputs trigger replies from LLM 330 and responses to the inputs based on the replies. 
With each new input, prompt engine 305 gathers context data from application 301 which includes the chat history, i.e., previous inputs, replies, suggestions, and so on," Fabian paragraph 0079, spreadsheet application executing within the context of a word processing app (i.e., a second app) where the chat history context is from the underlying word processing second app).
As to claim 4, Fabian further discloses the method of claim 1, wherein the selection is related to a first application ("The spreadsheet environment of application service 110 may be implemented a natively installed and executed application, a browser-based application, or a mobile application, and may execute in a stand-alone manner, within the context of another application such as a presentation application or word processing application," Fabian paragraph 0043; "In an implementation, the user enters a query into a chat interface displayed in the user experience of a spreadsheet environment of the productivity application, such as a spreadsheet application displaying a workbook or a word processing document including a spreadsheet data table. The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet. The intent of the user's request may be for a description or explanation of the dataset or to improve the dataset by requesting suggestions for modifying the dataset, e.g., adding a calculated column. The natural language input includes a text-based input keyed into the chat interface by the user or spoken by the user and translated by a speech-to-text module," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. 
An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099) and generating the output further includes using content relating to a second application as input to the language model ("The spreadsheet environment of application service 110 may be implemented a natively installed and executed application, a browser-based application, or a mobile application, and may execute in a stand-alone manner, within the context of another application such as a presentation application or word processing application," Fabian paragraph 0043; "In an implementation, the user enters a query into a chat interface displayed in the user experience of a spreadsheet environment of the productivity application, such as a spreadsheet application displaying a workbook or a word processing document including a spreadsheet data table. The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet. The intent of the user's request may be for a description or explanation of the dataset or to improve the dataset by requesting suggestions for modifying the dataset, e.g., adding a calculated column. The natural language input includes a text-based input keyed into the chat interface by the user or spoken by the user and translated by a speech-to-text module," Fabian paragraph 0092; "The inputs may relate to the suggestion that was implemented, to another suggestion, to an error generated in relation to the implemented suggestion, or to another aspect of workbook data 320. The inputs trigger replies from LLM 330 and responses to the inputs based on the replies. 
With each new input, prompt engine 305 gathers context data from application 301 which includes the chat history, i.e., previous inputs, replies, suggestions, and so on," Fabian paragraph 0079, spreadsheet application executing within the context of a word processing app (i.e., a second app) where the chat history context is from the underlying word processing second app).
As to claim 5, Fabian further discloses the method of claim 1, wherein the selection is editable and the output is replacement content for the selection (“The user clicks the “Add column” button, causing one or more of application components 303 to modify data table 502 to include the new column and to fill the column according to instructions provided in the reply. Application components 303 transmit an instruction to user interface 307 to update the display of data table 502 accordingly,” Fabian paragraph 0087, replacing old data table with new data table containing the additional column).
As to claim 6, Fabian further discloses the method of claim 1, wherein the prompt is further generated based on a popularity of the prompt given the context ("Continuing operational scenario 500 in FIG. 5C, prompt engine 305 submits the prompt to LLM 330. Upon receiving the prompt, LLM 330 generates three suggestions, then performs a qualitative evaluation of each of the suggestions with respect to how well they respond to the inquiry in terms of accuracy, appropriateness, relevance, quality, toxicity, and so on. In this exemplary scenario, LLM 330 determines that two suggestions are of low relevance to data table 502," Fabian paragraph 0086; "The application configures a display of the suggestions according to the self-evaluation provided by the LLM for each suggestion, with the suggestion corresponding to the highest level of confidence, quality, or accuracy presented first," Fabian paragraph 0112, determining what prompts will be relevant to the context (i.e., what prompts are more likely to be popular with the user)).
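The ordering behavior the cited paragraph 0112 describes, displaying suggestions according to the LLM's self-evaluation with the highest-confidence suggestion first, can be sketched as follows. This is an illustrative reconstruction only; the field names (`text`, `confidence`) are assumptions and do not appear in Fabian.

```python
# Illustrative sketch (not Fabian's implementation): suggestions carry a
# self-evaluated confidence score, and the display orders them so the
# highest-confidence suggestion is presented first.

suggestions = [
    {"text": "Add a calculated column", "confidence": 0.91},
    {"text": "Sort by salary",          "confidence": 0.55},
    {"text": "Rename header",           "confidence": 0.72},
]

# Sort descending by self-evaluated confidence for display.
ranked = sorted(suggestions, key=lambda s: s["confidence"], reverse=True)
```

Under this sketch, `ranked[0]` is the suggestion presented first in the task pane.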
As to claim 7, Fabian further discloses the method of claim 1 wherein the context relates to an application associated with the selection ("The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053).
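The prompt assembly the cited paragraph 0053 describes, combining the user input with contextual information such as the chat history and a portion of the spreadsheet (row and column headers plus a subset of the data), can be sketched as follows. All names here (`build_prompt`, `chat_history`, `max_rows`) are hypothetical; Fabian does not disclose this code.

```python
# Illustrative sketch (not Fabian's implementation) of assembling an LLM
# prompt from the user's input plus contextual information: chat history
# and a portion of the spreadsheet (headers and a subset of the rows).

def build_prompt(user_input, chat_history, headers, rows, max_rows=5):
    """Assemble a prompt string from the input and spreadsheet context."""
    lines = ["Chat history:"]
    lines += [f"  {turn}" for turn in chat_history]
    lines.append("Spreadsheet portion:")
    lines.append("  " + " | ".join(headers))
    for row in rows[:max_rows]:  # only a subset of the data is included
        lines.append("  " + " | ".join(str(c) for c in row))
    return "\n".join(lines) + f"\nUser request: {user_input}"

prompt = build_prompt(
    "What is this data about?",
    ["user: hi", "assistant: hello"],
    ["Employee Name", "Salary"],
    [["Alice", 90000], ["Bob", 85000]],
)
```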
As to claim 8, Fabian further discloses the method of claim 1, wherein the output is a revised version of the selection generated based on the content and the prompt ("Upon receiving the user's selection of the first suggestion in task pane 144, application service 110 implements the suggestion by adding a column (not shown) to the spreadsheet data," Fabian paragraph 0049, revising the spreadsheet by adding a column).
As to claim 9, Fabian further discloses the method of claim 1, wherein the output is a first output, and wherein the language model further generates a second output by providing the content and the prompt as input to the language model ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027), and the method further comprises:
displaying the second output with the first output ("In task pane 144 of user experience 143, application service 110 displays three cards, each containing, in a natural language format, one of three suggestions provided by LLM service 120. Application service 110 may display the suggestions according to how LLM service 120 self-evaluated the suggestions, e.g., according to relevance to the input or correctness," Fabian paragraph 0048),
wherein the first output and the second output are selectable by a user ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027), and
wherein the first output is selected by the user ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027).
As to claim 10, Fabian further discloses the method of claim 1, wherein the prompt is a first prompt, the output is a first output, and the method further comprises:
receiving an indication that the first output was selected ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "In task pane 144 of user experience 143, application service 110 displays three cards, each containing, in a natural language format, one of three suggestions provided by LLM service 120. Application service 110 may display the suggestions according to how LLM service 120 self-evaluated the suggestions, e.g., according to relevance to the input or correctness," Fabian paragraph 0048); and
in response to receiving the indication, displaying a second prompt operable to generate a second output based on the first output and the second prompt as input to the language model (“The chat interface displays a turn-based conversation through which the user can iterate or step through multiple revisions of the spreadsheet which are responsive to the user's inquiries. As more inputs are received, the chat history adds to the contextual information that is used by the LLM to provide more accurate results, i.e., results which are increasingly responsive to the user's inquiries during the conversation. As the LLM is presented with more contextual information, the results (i.e., suggestions) generated by the LLM will be more specific to the user's inquiries and to the spreadsheet context,” Fabian paragraph 0039).
As to claim 11, Fabian further discloses the method of claim 10, further comprising: receiving a selection of the second prompt; and
generating the second output by providing the first output and the second prompt as input to the language model ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053).
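The follow-up behavior the cited paragraph 0027 describes, folding the user's selection of a suggestion into the contextual information of the next prompt so the LLM returns a more focused reply, can be sketched as follows. The function and argument names are illustrative assumptions, not Fabian's.

```python
# Illustrative sketch (not Fabian's implementation): the user's selected
# suggestion is appended to the chat-history context and carried into the
# follow-up prompt, steering the next LLM reply toward that selection.

def follow_up_prompt(chat_history, selected_suggestion):
    """Build the next prompt, carrying the selection as added context."""
    history = chat_history + [f"user selected: {selected_suggestion}"]
    return "\n".join(history) + "\nPlease refine based on the selection."

p = follow_up_prompt(
    ["user: How can I get better results?",
     "assistant: (three suggestions shown)"],
    "Add a calculated column",
)
```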
As to claim 13, Fabian discloses a method comprising:
receiving a selection of content ("The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099);
in response to receiving the selection of the content, displaying an option for generating an explanation of the selection (“FIGS. 5B-5F continue operational scenario 500, illustrating a turn-based chat including inputs received from a user, prompts generated based on the inputs, and responses based on replies to the inputs from LLM 330. In FIG. 5B, the user enters the natural language inquiry about data table 502 into task pane 503. User interface 307 transmits the user input to prompt engine 305 which generates a prompt based on the input,” Fabian paragraph 0082; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053; Fabian Figure 5A 503 “What is this data about?” input);
generating the explanation by providing the selection of content and the option as input to a language model ("Continuing with FIG. 5B, with the prompt configured, prompt engine 305 sends the prompt to LLM 330. LLM 330 generates a reply to the prompt and transmits the reply to prompt engine 305. Prompt engine 305 processes the reply, including extracting the natural language explanation enclosed in the appropriate tags, and generates a response for display by user interface 307 in task pane 503," Fabian paragraph 0084; Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation); and
displaying the explanation ("Continuing with FIG. 5B, with the prompt configured, prompt engine 305 sends the prompt to LLM 330. LLM 330 generates a reply to the prompt and transmits the reply to prompt engine 305. Prompt engine 305 processes the reply, including extracting the natural language explanation enclosed in the appropriate tags, and generates a response for display by user interface 307 in task pane 503," Fabian paragraph 0084; Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation).
As to claim 14, Fabian further discloses the method of claim 13, wherein the explanation is further generated using context relating to the selection as input to the language model ("The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053).
As to claim 15, Fabian further discloses the method of claim 14, wherein the context includes data available to an operating system executing on a user device (“Software 1305 (including application service process 1306) may be implemented in program instructions and among other functions may, when executed by processing system 1302, direct processing system 1302 to operate as described with respect to the various operational scenarios, sequences, and processes illustrated herein. For example, software 1305 may include program instructions for implementing an application service process as described herein.” Fabian paragraph 0128; “Software 1305 may include additional processes, programs, or components, such as operating system software, virtualization software, or other application software. Software 1305 may also comprise firmware or some other form of machine-readable processing instructions executable by processing system 1302,” Fabian paragraph 0129).
As to claim 16, Fabian further discloses the method of claim 14, wherein the context includes additional content displayed in an application outside the selection of content ("The spreadsheet environment of application service 110 may be implemented a natively installed and executed application, a browser-based application, or a mobile application, and may execute in a stand-alone manner, within the context of another application such as a presentation application or word processing application," Fabian paragraph 0043; "In an implementation, the user enters a query into a chat interface displayed in the user experience of a spreadsheet environment of the productivity application, such as a spreadsheet application displaying a workbook or a word processing document including a spreadsheet data table. The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet. The intent of the user's request may be for a description or explanation of the dataset or to improve the dataset by requesting suggestions for modifying the dataset, e.g., adding a calculated column. The natural language input includes a text-based input keyed into the chat interface by the user or spoken by the user and translated by a speech-to-text module," Fabian paragraph 0092; "The inputs may relate to the suggestion that was implemented, to another suggestion, to an error generated in relation to the implemented suggestion, or to another aspect of workbook data 320. The inputs trigger replies from LLM 330 and responses to the inputs based on the replies. 
With each new input, prompt engine 305 gathers context data from application 301 which includes the chat history, i.e., previous inputs, replies, suggestions, and so on," Fabian paragraph 0079, spreadsheet application executing within the context of a word processing app (i.e., a second app) where the chat history context is from the underlying word processing second app).
As to claim 17, Fabian further discloses the method of claim 14, wherein the selection of content includes a term with multiple meanings and the explanation is related to one meaning of the multiple meanings based on the context (Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation, selected portion of the spreadsheet can have multiple meanings and a particular meaning is displayed to the user).
As to claim 18, Fabian further discloses the method of claim 14, wherein the explanation relates to a definition of the selection (Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation, defining what the data means in the selected portion of the spreadsheet).
As to claim 19, Fabian further discloses the method of claim 13, wherein the explanation describes a relationship between a first phrase and a second phrase in the selection of content (Fabian Figure 5A “Employee Name” and “Salary” columns; Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation which shows the relationship between the employee name phrases and the salary phrases).
As to claim 21, Fabian further discloses the method of claim 13, further comprising:
receiving a selection of the option (“FIGS. 5B-5F continue operational scenario 500, illustrating a turn-based chat including inputs received from a user, prompts generated based on the inputs, and responses based on replies to the inputs from LLM 330. In FIG. 5B, the user enters the natural language inquiry about data table 502 into task pane 503. User interface 307 transmits the user input to prompt engine 305 which generates a prompt based on the input,” Fabian paragraph 0082; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053; Fabian Figure 5A 503 “What is this data about?” input);
providing prompts relating to explanations ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027, providing additional prompts related to explanations if the user’s request for an explanation was ambiguous or underspecified); and
receiving a selection of a prompt of the prompts, wherein the explanation is based on the prompt of the prompts and the selection (“The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027).
As to claim 22, Fabian further discloses the method of claim 21, wherein the explanation is based on the selection of content, the prompt of the prompts, and context relating to the selection (“The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053).
As to claim 23, Fabian further discloses the method of claim 21, wherein providing prompts relating to explanations includes providing a user interface element for receiving a text prompt from a user, wherein the text prompt is one of the prompts (“In operation, the user of computing device 133 interacts with application service 110 via a natural language interface of task pane 142 in user experience 141. In user experience 141, the user keys in a natural language statement or inquiry (‘How can I make this better?’),” Fabian paragraph 0047).
As to claim 24, Fabian discloses a system comprising:
a processor (“Referring still to FIG. 13, processing system 1302 may comprise a micro-processor,” Fabian paragraph 0125); and
a memory configured with instructions (“Processing system 1302 loads and executes software 1305 from storage system 1303. Software 1305 includes and implements application service process 1306, which is (are) representative of the application service processes discussed with respect to the preceding Figures, such as processes 200 and 600. When executed by processing system 1302, software 1305 directs processing system 1302 to operate as described herein for at least the various processes, operational scenarios, and sequences discussed in the foregoing implementations,” Fabian paragraph 0124) to:
receive a selection of content ("The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099);
in response to receiving the selection, provide a prompt based on the selection and context relating to the selection ("The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053);
display the prompt ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "In task pane 144 of user experience 143, application service 110 displays three cards, each containing, in a natural language format, one of three suggestions provided by LLM service 120. Application service 110 may display the suggestions according to how LLM service 120 self-evaluated the suggestions, e.g., according to relevance to the input or correctness," Fabian paragraph 0048);
in response to receiving a selection of the prompt, generate output by providing the content and the prompt as input to a language model ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053); and
display the output ("Upon receiving the user's selection of the first suggestion in task pane 144, application service 110 implements the suggestion by adding a column (not shown) to the spreadsheet data. In task pane 146 of user experience 145, application service 110 configures and displays suggested actions in natural language based on the chat history and spreadsheet contextual information," Fabian paragraph 0049).
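Taken together, the claim-24 steps as mapped to the Fabian citations (receive a selection, build a prompt from the selection and its context, display the prompt, generate output by providing content and prompt to a language model, and display the output) can be sketched end to end as follows. The stub language model and every name here are assumptions for illustration only.

```python
# Illustrative end-to-end sketch (not Fabian's implementation) of the
# claimed flow: selection -> prompt from selection + context -> display
# prompt -> LLM output from content + prompt -> display output.

def stub_llm(prompt):
    # Stand-in for an LLM service such as Fabian's LLM 330.
    return f"Suggestion based on: {prompt[:40]}..."

def handle_selection(selection, context, display):
    prompt = f"{context}\nAbout: {selection}"
    display(prompt)            # display the prompt
    output = stub_llm(prompt)  # content and prompt as LLM input
    display(output)            # display the output
    return output

shown = []
out = handle_selection("portion 802 of spreadsheet data 801",
                       "chat history: ...", shown.append)
```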
As to claim 25, it is substantially similar to claim 2 and is therefore rejected using the same rationale as above.
As to claim 26, it is substantially similar to claim 5 and is therefore rejected using the same rationale as above.
As to claim 28, Fabian discloses a system comprising:
a processor (“Referring still to FIG. 13, processing system 1302 may comprise a micro-processor,” Fabian paragraph 0125); and
a memory configured with instructions (“Processing system 1302 loads and executes software 1305 from storage system 1303. Software 1305 includes and implements application service process 1306, which is (are) representative of the application service processes discussed with respect to the preceding Figures, such as processes 200 and 600. When executed by processing system 1302, software 1305 directs processing system 1302 to operate as described herein for at least the various processes, operational scenarios, and sequences discussed in the foregoing implementations,” Fabian paragraph 0124) to:
receive a selection of content ("The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099);
in response to receiving the selection of the content, display an option for generating an explanation of the selection (“FIGS. 5B-5F continue operational scenario 500, illustrating a turn-based chat including inputs received from a user, prompts generated based on the inputs, and responses based on replies to the inputs from LLM 330. In FIG. 5B, the user enters the natural language inquiry about data table 502 into task pane 503. User interface 307 transmits the user input to prompt engine 305 which generates a prompt based on the input,” Fabian paragraph 0082; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053; Fabian Figure 5A 503 “What is this data about?” input);
generate the explanation by providing the selection of content and the option as input to a language model ("Continuing with FIG. 5B, with the prompt configured, prompt engine 305 sends the prompt to LLM 330. LLM 330 generates a reply to the prompt and transmits the reply to prompt engine 305. Prompt engine 305 processes the reply, including extracting the natural language explanation enclosed in the appropriate tags, and generates a response for display by user interface 307 in task pane 503," Fabian paragraph 0084; Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation); and
display the explanation ("Continuing with FIG. 5B, with the prompt configured, prompt engine 305 sends the prompt to LLM 330. LLM 330 generates a reply to the prompt and transmits the reply to prompt engine 305. Prompt engine 305 processes the reply, including extracting the natural language explanation enclosed in the appropriate tags, and generates a response for display by user interface 307 in task pane 503," Fabian paragraph 0084; Fabian Figure 5B 330 "This data shows payroll data of salaried employees by employee. The employee data includes the name of the employee" explanation).
As to claim 29, it is substantially similar to claim 15 and is therefore rejected using the same rationale as above.
As to claim 30, it is substantially similar to claim 14 and is therefore rejected using the same rationale as above.
As to claim 31, Fabian discloses a computing device, comprising:
at least one processor (“Referring still to FIG. 13, processing system 1302 may comprise a micro-processor,” Fabian paragraph 0125); and
a non-transitory computer-readable medium storing executable instructions that, when executed by the at least one processor, cause the computing device (“Processing system 1302 loads and executes software 1305 from storage system 1303. Software 1305 includes and implements application service process 1306, which is (are) representative of the application service processes discussed with respect to the preceding Figures, such as processes 200 and 600. When executed by processing system 1302, software 1305 directs processing system 1302 to operate as described herein for at least the various processes, operational scenarios, and sequences discussed in the foregoing implementations,” Fabian paragraph 0124) to:
receive a selection of content ("The user-supplied input may refer to a dataset, such as a data table, in a spreadsheet, or the input may reference a particular aspect of the spreadsheet," Fabian paragraph 0092; "In operational scenario 800, a user submits input regarding portion 802 of spreadsheet data 801 including loan application data. An application service associated with the spreadsheet application generates a prompt based on the input and includes alternative version 803 of portion 802 in the prompt for context," Fabian paragraph 0099);
in response to receiving the selection, provide a prompt based on the selection and context relating to the selection ("The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053);
display the prompt ("In some scenarios, the user's general inquiry is ambiguous or underspecified, and the application prompts the LLM to interpret the reply in multiple ways and to generate suggestions based on the multiple interpretations. The accuracy or appropriateness of the suggestions with respect to the inquiry may depend on additional information not provided in the inquiry, such as the user's possible intentions in making the inquiry. For example, the user may ask, “How can I get better results?” The application may include in its prompt to the LLM an instruction to interpret the inquiry in multiple ways and to generate suggestions based on the interpretations. The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "In task pane 144 of user experience 143, application service 110 displays three cards, each containing, in a natural language format, one of three suggestions provided by LLM service 120. Application service 110 may display the suggestions according to how LLM service 120 self-evaluated the suggestions, e.g., according to relevance to the input or correctness," Fabian paragraph 0048);
in response to receiving a selection of the prompt, generate output by providing the content and the prompt as input to a language model ("The application presents the suggestions generated by the LLM to the user in the task pane. The application may submit a follow-up prompt based on the user's selection of a suggestion and including in the contextual information of the prompt the selection made by the user thereby to receive a more focused reply or suggestion from the LLM," Fabian paragraph 0027; "The application service generates a prompt for the LLM service based on the input and at least a portion of the spreadsheet (step 203). In an implementation, the prompt includes contextual information, such as the chat history and a portion of the spreadsheet including row and column headers and a subset of the data," Fabian paragraph 0053); and
display the output ("Upon receiving the user's selection of the first suggestion in task pane 144, application service 110 implements the suggestion by adding a column (not shown) to the spreadsheet data. In task pane 146 of user experience 145, application service 110 configures and displays suggested actions in natural language based on the chat history and spreadsheet contextual information," Fabian paragraph 0049).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Fabian et al. (US 20240303440 A1, hereinafter Fabian) in view of Seth et al. (US 20230315983 A1, hereinafter Seth).
As to claim 12, Fabian discloses the method of claim 1; however, Fabian does not appear to explicitly disclose a limitation wherein the language model executes on a user device.
Seth teaches a limitation wherein the language model executes on a user device (“In some examples, large language model 750 may be accessed directly and may be executed on local hardware. In other examples, the large language model 750 may be accessed via an application program interface to a cloud hosted language model (e.g. through network 746),” Seth paragraph 0164).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Fabian to execute the language model locally, as taught by Seth. One would have been motivated to make such a combination so that the prompting and reply features could be used in more situations, such as where there is no internet connection, thus making the finished product more robust and reducing frustration for the user.
As to claim 27, it is substantially similar to claim 12 and is therefore rejected using the same rationale as above.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Fabian et al. (US 20240303440 A1, hereinafter Fabian) in view of Mujica et al. (US 20030117447 A1, hereinafter Mujica).
As to claim 20, Fabian discloses the method of claim 13; however, Fabian does not appear to explicitly disclose a limitation wherein the selection of content is read-only content.
Mujica teaches a limitation wherein the selection of content is read-only content ("An individual cell can be locked as shown in FIGS. 3e-g. The desired cell is selected with the cursor as shown in FIG. 3e. A lock cell input or key is then activated by the user. In the illustrated embodiment, the lock cell input is selected from the edit menu as shown in FIG. 3f by pressing a function key and then selecting the 'Lock/Unlock' function. Further, the user could lock a range of cells or block of cells by first selecting all the cells to be locked and then using the 'Lock/Unlock' function," Mujica paragraph 0022; "A separate unlock input can be used, or the lock input can be a toggle on/off of the selected cells," Mujica paragraph 0020).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Fabian to allow the user to set content to read-only, as taught by Mujica. One would have been motivated to make such a combination to prevent accidental editing or loss of the content (Mujica paragraphs 0021 and 0022).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20240012992 A1 to Tamm et al. discloses content paths and a framework for content creation, where content and context are used to generate a prompt and the prompt is used to generate output from a large language model;
US 20240184812 A1 to McDaniel et al. discloses distributed active learning in natural language processing for determining resource metrics, where prompts for use with a large language model are generated and then filtered/displayed based on sentiment scores;
US 20240256762 A1 to Beauchamp discloses methods and systems for prompting large language models to process inputs from multiple user elements, where a subset of text is selected, a prompt is generated based on the subset of text, and the prompt is submitted to a large language model to generate edits to the subset of text; and
US 20240289360 A1 to Chepkwony discloses generating new content from existing productivity application content using a large language model, where a user can select an expand option to automatically generate a new prompt based on a user-entered prompt.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL SAMWEL whose telephone number is (313) 446-6549. The examiner can normally be reached Monday through Thursday 8:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL SAMWEL/ Primary Examiner, Art Unit 2171