DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office Action is in response to the application filed on 03/29/2024.
3. The IDSs filed on 03/29/2024 (2), 09/04/2024, and 10/20/2025 have been considered and entered into the application file.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
4. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dolan et al. (US 20220414320 A1).
The disclosure of Dolan et al. (“Dolan”) relates to techniques for interactive content generation.
As per claim 1, Dolan discloses a computing system (see at least the content generation system of Figs. 1 and 5-8) for automatically generating personalized and structured content, the computing system comprising: one or more processors (for example, see processing unit 502, Fig. 5); and one or more non-transitory computer-readable media (for example, see system memory 504, Fig. 5) that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations ([0006] FIG. 1 illustrates an overview of an example system for interactive content generation), the operations comprising:
providing a user interface to a user computing system ([0066] With reference to FIG. 4A, view 400 illustrates an example new document creation pane 402, comprising blank document element 404 and quick start element 406. In examples, a user may actuate blank document element 404 to cause the document editor to create a new document in which a user may manually draft content; also see the user interfaces of Figs. 4A-4D);
receiving a prompt from the user computing system via the user interface ([0037] Method 200 begins at operation 202, where a request is received to generate content. Such a request may be referred to herein as a content generation request. The request may comprise a content seed, such as one or more words, sentences, or paragraphs. As noted above, the content seed may comprise instructions for producing processed content or at least a part of a first draft prepared by a user. The request may be received from a document editor, such as document editor 116 or 118 discussed above with respect to computing devices 104 and 106, respectively, in FIG. 1. [0050] Method 300 begins at operation 302, where user input is received to generate content. For example, user input may be received at a document editor, such as document editor 116 or 118. In examples, the user input may comprise actuation of a user interface element to generate a rough draft based on a content seed);
providing the prompt to a generative model, the generative model being a machine-learned model trained to process language input prompts to generate a language output ([0037] Method 200 begins at operation 202, where a request is received to generate content. Such a request may be referred to herein as a content generation request. The request may comprise a content seed, such as one or more words, sentences, or paragraphs. As noted above, the content seed may comprise instructions for producing processed content or at least a part of a first draft prepared by a user);
receiving a generative output generated by the generative model in response to the prompt ([0038] At operation 204, content is generated based on the received content seed. For example, a generative model may be used to produce the processed content. In examples, operation 204 comprises selecting a generative model from a set of available models, as was discussed above with respect to content generator 114 in FIG. 1);
generating a modified output by modifying the generative output based at least in part on historical user data for a user associated with the prompt ([0018] In some instances, the user may further provide an additional content source for use by the generative model or additional content associated with the user (e.g., personal documents, documents of the user's team, etc.) may be used, thereby grounding the processed content according, at least in part, to the additional content); and
providing the modified output via the user interface (a generative model may produce updated processed content based at least in part on the previously processed content, the user input, and/or, in some examples, additional content, as may be indicated by a user (Abstract), [0008] FIG. 2B illustrates an overview of an example method for generating updated content in response to an iterative generation request according to aspects described herein).
As per claim 2, Dolan further discloses that the computing system of claim 1, wherein receiving the generative output generated by the generative model comprises receiving the generative output generated by the generative model and providing the generative output via the user interface ([0068] View 440 of FIG. 4C illustrates an example of the resulting processed content 442 that may be presented by the document editor. As illustrated, uncertain subparts 444 and 446 are emphasized, such that a user may interact with the uncertain subparts to provide clarification or select from a set of replacement subparts, among other examples. For example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content).
As per claim 3, Dolan further discloses that the computing system of claim 2, the operations further comprising receiving an insertion request from the user computing system via the user interface subsequent to providing the generative output, wherein generating the modified output comprises generating the modified output in response to receiving the insertion request ([0069] Uncertain subpart 444 illustrates another example, in which user actuation of uncertain subpart 444 causes prompt 462 in FIG. 4D to be presented. As illustrated, prompt 462 comprises a set of replacement subparts for user selection, such that the user may select replacement subpart 464 or 466 to replace uncertain subpart 444. Actuation of preview button 468 may cause the document editor to present updated processed content based on a user's selection of one of replacement subparts 464 or 466).
As per claim 4, Dolan further discloses that the computing system of claim 3, wherein providing the user interface comprises providing an integrated development environment in which content is insertable in-line, wherein providing the generative output comprises providing the generative output in a generative area of the integrated development environment, the generative area separating the generative output from being in-line within the integrated development environment, wherein providing the modified output via the user interface comprises inserting the modified output in-line within the integrated development environment ([0068] View 440 of FIG. 4C illustrates an example of the resulting processed content 442 that may be presented by the document editor. As illustrated, uncertain subparts 444 and 446 are emphasized, such that a user may interact with the uncertain subparts to provide clarification or select from a set of replacement subparts, among other examples. For example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content. As illustrated, comments 448 are an example of user interaction (“Kelly Shane”) with an automated conversational agent (“Editor”) using natural language. It will be appreciated that any of a variety of alternative or additional interaction techniques may be used to receive similar user input associated with the processed content according to aspects described herein. Also see Figs. 4A-4D, wherein FIGS. 4A-4D illustrate overviews of example views for interactive content generation according to aspects described herein).
As per claim 5, Dolan further discloses that the computing system of claim 4, wherein receiving the prompt from the user computing system via the user interface comprises receiving the prompt within the generative area of the user interface ([0025] Thus, user input may be received in response to prompts (e.g., associated with subparts having a low confidence score), directly to the processed content (e.g., as additions, changes, or deletions), or as part of a communication session between the user and an automated conversational agent, among other examples. [0067] Accordingly, user input may be received in text box 424, after which the user may actuate create button 426 to cause the creation of processed content based on the content seed entered in text box 424 (e.g., according to method 200 and 300 of FIGS. 2A and 3A, respectively)).
As per claim 6, Dolan further discloses that the computing system of claim 4, wherein the integrated development environment comprises at least one formatting selection interface for selecting formatting rules for text in-line within the integrated development environment, wherein providing the modified output via the user interface comprises inserting the modified output in-line within the integrated development environment and formatted according to the formatting rules ([0057] For example, the user input may be received in association with an uncertain subpart that was highlighted by the document editor as a result of having a low confidence score. Examples of such user input include, but are not limited to, clarification, additional instructions, and/or a selection of a replacement subpart. [0068] View 440 of FIG. 4C illustrates an example of the resulting processed content 442 that may be presented by the document editor. As illustrated, uncertain subparts 444 and 446 are emphasized, such that a user may interact with the uncertain subparts to provide clarification or select from a set of replacement subparts, among other examples. For example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content. As illustrated, comments 448 are an example of user interaction (“Kelly Shane”) with an automated conversational agent (“Editor”) using natural language. It will be appreciated that any of a variety of alternative or additional interaction techniques may be used to receive similar user input associated with the processed content according to aspects described herein. Also see [0069-0070], and Fig. 4C).
As per claim 7, Dolan further discloses that the computing system of claim 1, wherein receiving the prompt from the user computing system via the user interface comprises receiving selection of text within the user interface, the text being formatted according to embedded formatting rules, wherein generating the modified output comprises generating the modified output by modifying the generative output based at least in part on the historical user data and the embedded formatting rules received with the selection of text ([0096] receiving a second iterative generation request comprising an indication of a second user input; processing, using a second generative model different than the first generative model, the second user input based on a document history associated with the processed content to produce a second updated processed content. In a further example, the document history comprises: at least a part of the processed content; information associated with the first user input; and at least a part of the first updated processed content. [0068] View 440 of FIG. 4C illustrates an example of the resulting processed content 442 that may be presented by the document editor. As illustrated, uncertain subparts 444 and 446 are emphasized, such that a user may interact with the uncertain subparts to provide clarification or select from a set of replacement subparts, among other examples. For example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content. As illustrated, comments 448 are an example of user interaction (“Kelly Shane”) with an automated conversational agent (“Editor”) using natural language. It will be appreciated that any of a variety of alternative or additional interaction techniques may be used to receive similar user input associated with the processed content according to aspects described herein).
As per claim 8, Dolan further discloses that the computing system of claim 1, wherein the generative output comprises a block template generated by the generative model (see the generated template with input fields in Fig. 4A), the block template defining one or more fields associated with the prompt (as shown in Fig. 4A, the NEW field is shown selected), wherein generating the modified output comprises populating eligible fields of the one or more fields within the block template based on the historical user data, the eligible fields being associated with the historical user data (as shown in Figs. 4A-4B, the user creates a NEW document using quick start element 406, causing the document editor to prompt the user for input to be used as a content seed with which to produce processed content to form a first draft. Accordingly, user input may be received in text box 424, after which the user may actuate create button 426 to cause the creation of processed content based on the content seed entered in text box 424 (e.g., according to methods 200 and 300 of FIGS. 2A and 3A, respectively). Examiner's note: the user input that is to be used as a content seed (the content created in Fig. 4B) is the user's own data, i.e., historical user data. [0068] In another example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content. As illustrated, comments 448 are an example of user interaction (“Kelly Shane”) with an automated conversational agent (“Editor”) using natural language).
As per claim 9, Dolan further discloses that the computing system of claim 1, wherein the historical user data is not provided to the generative model ([0037] Method 200 begins at operation 202, where a request is received to generate content. Such a request may be referred to herein as a content generation request. The request may comprise a content seed, such as one or more words, sentences, or paragraphs. As noted above, the content seed may comprise instructions for producing processed content or at least a part of a first draft prepared by a user. The request may be received from a document editor, such as document editor 116 or 118 discussed above with respect to computing devices 104 and 106, respectively, in FIG. 1. Examiner's note: as illustrated in the method steps of Figs. 2A-2B, no personal information is passed or provided to the content generator 114).
As per claim 10, Dolan further discloses that the computing system of claim 1, wherein the historical user data includes one or more of a name, contact information, contacts, calendar events, or location history associated with the user ([0042] In some instances, the request further comprises at least a part of a document history associated with the previously processed content, a user input received from a user, and/or an indication of additional content. As a further example, the request may comprise an identifier associated with the previously processed content, such that a document history may be identified using the identifier, as may be the case when the document history is maintained by a document service. [0058] In some instances, the request further comprises at least a part of a document history associated with the previously processed content, user input received from a user at operation 352 or 354, and/or an indication of additional content (e.g., as may have been received as part of the user input at operations 352 or 354). [0068] For example, comments 448 associated with uncertain subpart 446 indicate a user's instructions to add additional detail to the processed content. As illustrated, comments 448 are an example of user interaction (“Kelly Shane”) with an automated conversational agent (“Editor”) using natural language).
As per method claims 11-16, these method claims recite steps that correspond to system claims 1-3, and 8-10, respectively. Thus, the method claims are also rejected under similar citations given to the system claims.
As per computer readable media claims 17-20, these media claims recite limitations that correspond to system claims 1 and 8-10, respectively. Thus, the media claims are also rejected under similar citations given to the system claims.
5. Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Smith et al. (US 20240273291 A1).
Smith et al. (“Smith”) is directed to a generative collaborative publishing system.
As per claim 1, Smith discloses a computing system for automatically generating personalized and structured content, the computing system comprising: one or more processors; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations ([0382] In FIG. 23, generative collaborative publishing system 2350 represents portions of generative collaborative publishing system 240 when the computer system 2300 is executing those portions of generative collaborative publishing system 240. Instructions 2312 include portions of generative collaborative publishing system 2350 when those portions of the generative collaborative publishing system 2350 are being executed by processing device 2302), the operations comprising:
providing a user interface to a user computing system (user interface 212, Fig. 2);
receiving a prompt from the user computing system via the user interface ([0071] The post-publication feedback mechanism 124 generates post-publication contribution feedback 136 based on one or more of the received contributions 128 and returns the post-publication contribution feedback 136 to the generative language model 106. For example, the post-publication contribution feedback 136 includes a segment-contribution pair that has received the highest amount of social reaction. The post-publication contribution feedback 136 is used to refine the prompt x, e.g., by modifying at least a portion of the prompt to generate a new document based on a received contribution 128 or other post-publication contribution feedback 136. Also see [0052]);
providing the prompt to a generative model, the generative model being a machine-learned model trained to process language input prompts to generate a language output ([0057] For example, prompt refinements and/or model fine tuning performed by one or more pre-publication and/or post-publication feedback mechanisms on documents previously output by the generative language model can improve the quality of the generative language model output to the extent that no pre-publication review or filtering of the documents is needed, such that the documents 108 produced by the generative language model can be published directly by the publishing subsystem 120);
receiving a generative output generated by the generative model in response to the prompt ([0216] Edit window ′R16 also includes an audit section ′R26. Audit section ′R26 receives and displays information that is used to keep track of which generative language model-generated texts have been reviewed and edited by which human reviewers);
generating a modified output by modifying the generative output based at least in part on historical user data for a user associated with the prompt ([0352] In some implementations, prompts are generated or modified based on differences between post-publication contributions to writings output by a generative language model and the original writings themselves. For example, a difference between the first contribution received from the network and the first piece of writing generated by the generative language model is generated in response to the first prompt, and the second prompt is generated based on the difference between the first contribution received from the network and the first piece of writing generated by the generative language model in response to the first prompt); and
providing the modified output via the user interface ([0065] Editing tool 116 can be implemented as an automated document editor or grammar checking tool into which the document 108 is loaded. The automatically-generated document edits can be surfaced to a human reviewer/editor through a front end user interface for verification or modification. In other implementations, the edits are not automatically generated and instead the edits are received from a human editor via a front end user interface through which the human editor reviews and edits the document 108. also see [0070]).
As per claim 2, Smith further discloses that the computing system of claim 1, wherein receiving the generative output generated by the generative model comprises receiving the generative output generated by the generative model and providing the generative output via the user interface (In response to input of the second prompt to the generative language model, the generative language model outputs a second document different from the first document, where the second document includes a second piece of writing based on the second prompt. The second document is published to a network (Abstract). [0116] In response to input of generated prompt 304 into the generative language model, the generative language model of content generation subsystem 306 produces and outputs machine-generated content 308, which is based on the generated prompt 304).
As per claim 3, Smith further discloses that the computing system of claim 2, the operations further comprising receiving an insertion request from the user computing system via the user interface subsequent to providing the generative output, wherein generating the modified output comprises generating the modified output in response to receiving the insertion request ([0050] In some embodiments, the attribute data 104 is extracted from the online system in response to a user input received by an application software system. The attribute data 104 includes data that is specific to a user or a user group of the online system, in some implementations. In other words, output of the generative language model 106 can be customized for a particular user or user group of the online system based on the attribute data 104 that is selected and used to generate the task descriptions (e.g., prompts) to which the generative language model 106 is applied).
As per claim 4, Smith further discloses that the computing system of claim 3, wherein providing the user interface comprises providing an integrated development environment in which content is insertable in-line, wherein providing the generative output comprises providing the generative output in a generative area of the integrated development environment, the generative area separating the generative output from being in-line within the integrated development environment, wherein providing the modified output via the user interface comprises inserting the modified output in-line within the integrated development environment ([0078] User interface 212 can be used to input data, upload, download, receive, send, or share content items, including documents and contributions, initiate user interface events, and view or otherwise perceive output such as data and/or documents produced by application software system 230, generative collaborative publishing system 240, content moderation system 250, and/or content serving system 260. For example, user interface 212 can include a graphical user interface (GUI), a conversational voice/speech interface, a virtual reality, augmented reality, or mixed reality interface, and/or a haptic interface. User interface 212 includes a mechanism for logging in to application software system 230, clicking or tapping on GUI user input control elements, and interacting with digital content items such as documents. Examples of user interface 212 include web browsers, command line interfaces, and mobile app front ends. User interface 212 as used herein can include application programming interfaces (APIs)).
As per claim 5, Smith further discloses that the computing system of claim 4, wherein receiving the prompt from the user computing system via the user interface comprises receiving the prompt within the generative area of the user interface ([0029] A generative language model generates new text in response to model input. The model input includes a task description, also referred to as a prompt. [0111] In other implementations, the seed is obtained by prompt generation subsystem 302 as a parameter value, e.g., the seed is passed to prompt generation subsystem 302 from another application, process, or service, using an application program interface (API), or the seed is received as input from a front end user interface, such as a front end of application software system 230).
As per claim 6, Smith further discloses that the computing system of claim 4, wherein the integrated development environment comprises at least one formatting selection interface for selecting formatting rules for text in-line within the integrated development environment, wherein providing the modified output via the user interface comprises inserting the modified output in-line within the integrated development environment and formatted according to the formatting rules ([0112] To produce generated prompt 304, prompt generation subsystem 302 applies a prompt template to the seed. A prompt template includes a format and/or specification for arranging data and/or instructions, including the seed, for input to a generative language model so that the generative language model can read and process the inputs and generate corresponding output. Also see [0298]).
As per claim 7, Smith further discloses that the computing system of claim 1, wherein receiving the prompt from the user computing system via the user interface comprises receiving selection of text within the user interface, the text being formatted according to embedded formatting rules, wherein generating the modified output comprises generating the modified output by modifying the generative output based at least in part on the historical user data and the embedded formatting rules received with the selection of text ([0362] In some implementations, the user is identified and selected to be invited to contribute to a document output by a generative language model based on the user's history of receiving social actions on the user's previous publications of content to an online system, such as a social network service, relating to one or more particular topics. For example, social action data is received, where the social action data includes historical data about digital social actions received by the network in response to publication, by the user, via the network, of content relating to a topic associated with the document, a contributor score for the user is computed based on the social action data, where the contributor score includes an estimate of a likelihood of a contribution to the document, by the user, receiving digital social actions, and based on the contributor score, the user is selected, from a set of users of the network, to be invited to contribute to a document output by a generative language model).
As per claim 8, Smith further discloses that the computing system of claim 1, wherein the generative output comprises a block template generated by the generative model, the block template defining one or more fields associated with the prompt, wherein generating the modified output comprises populating eligible fields of the one or more fields within the block template based on the historical user data, the eligible fields being associated with the historical user data ([0112] To produce generated prompt 304, prompt generation subsystem 302 applies a prompt template to the seed. A prompt template includes a format and/or specification for arranging data and/or instructions, including the seed, for input a generative language model so that the generative language model can read and process the inputs and generate corresponding output. For instance, a prompt template contains a placeholder for the seed as well as one or more other placeholders for other data and/or parameter values or instructions. [0140] Template selector 406 selects a prompt template 408 from template data store 285 based on one or more of seed 404 and template scores 420. Prompt templates stored in template data store 285 can include initial templates and engineered templates. Also see [0118]).
As per claim 9, Smith further discloses that the computing system of claim 1, wherein the historical user data is not provided to the generative model ([0362] For example, social action data is received, where the social action data includes historical data about digital social actions received by the network in response to publication, by the user, via the network, of content relating to a topic associated with the document, a contributor score for the user is computed based on the social action data, where the contributor score includes an estimate of a likelihood of a contribution to the document, by the user, receiving digital social actions, and based on the contributor score, the user is selected, from a set of users of the network, to be invited to contribute to a document output by a generative language model. Examiner's note: the historical data about digital social actions is received by the network in response to publication by the user; as a result, the historical user data is not provided to the generative model).
As per claim 10, Smith further discloses that the computing system of claim 1, wherein the historical user data includes one or more of a name, contact information, contacts, calendar events, or location history associated with the user ([0252] User-activity mappings 1414 contain links between the user and the user's recent activity in the online system, such as content generation data that includes historical data about the user's content generation activity in the online system).
As per method claims 11-16, these method claims recite steps that correspond to system claims 1-3, and 8-10, respectively. Thus, the method claims are also rejected under similar citations given to the system claims.
As per computer readable media claims 17-20, these media claims recite limitations that correspond to system claims 1 and 8-10, respectively. Thus, the media claims are also rejected under similar citations given to the system claims.
6. Claims 1, 11 and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Spiegel et al. (US 20240249318 A1) (note: this publication claims the benefit of Provisional Application No. 63/440,785, filed on Jan. 24, 2023).
As per claim 1, Spiegel discloses a computing system (system of Fig. 11) for automatically generating personalized and structured content (the system receives user prompts during chat sessions with a chatbot and generates responses using a large language model; see Abstract), the computing system comprising:
one or more processors ([0324] The machine 1100 may include processors 1104, Fig. 11); and
one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations ([0325] The memory 1106 includes a main memory 1116, a static memory 1118, and a storage unit 1120, both accessible to the processors 1104 via the bus 1110), the operations comprising:
providing a user interface to a user computing system (see user chat with chatbot via user interface windows shown in Figs. 3A and 5A-5B);
receiving a prompt from the user computing system via the user interface ([0035] In some examples, a chatbot system receives a prompt from a user during a first interactive session);
providing the prompt to a generative model, the generative model being a machine-learned model trained to process language input prompts to generate a language output ([0266] In some examples, the trained machine-learning program 702 may be a generative AI model. Generative AI is a term that may refer to any type of artificial intelligence that can create new content from training data 706. For example, generative AI can produce text, images, video, audio, code, or synthetic data similar to the original data but not identical);
receiving a generative output generated by the generative model in response to the prompt ([0198] The output selector component 342 determines which response is more appropriate to a current stage of the conversation 410, and either returns the skill replies 432 to the LLM 338 for natural answer generation, or uses the responses 412a and 412b already generated and returns them directly to the interactive platform application 408 and the user);
generating a modified output by modifying the generative output based at least in part on historical user data for a user associated with the prompt; and providing the modified output via the user interface ([0293] A modified image or video stream may be presented in a graphical user interface displayed on the client system 102 as soon as the image or video stream is captured and a specified modification is selected. The transformation system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification).
As per method claim 11, this method claim recites steps that correspond to system claim 1. Thus, the method claim is also rejected under similar citations given to the system claim.
As per computer readable media claim 17, this media claim recites steps that correspond to system claim 1. Thus, the media claim is also rejected under similar citations given to the system claim.
Conclusion
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/ Primary Examiner, Art Unit 2174