DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is responsive to the Application filed on 5/7/2024. Claims 1-20 are pending in the case.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-6, 9, and 13 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-6, 14, and 16 of co-pending Application No. 18/657040 in view of Arunachalam et al. and Crouch et al. The claims of the instant application and the claims of the reference co-pending application are compared in the table below.
This is a provisional nonstatutory double patenting rejection.
Instant Application 18/657022 compared with Co-pending Application 18/657040:

Instant Application 18/657022, claim 1:
1. A method of generating a document having multiple chunks of text that collectively form at least a portion of the document, the method comprising:
determining a first chunk of the multiple chunks of text to generate dependent upon topical information relevant to the document that is to be created;
retrieving, from an index, at least one example first chunk of text with the at least one example first chunk of text being dependent upon a desired purpose of the first chunk and upon the topical information;
generating the first chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the first chunk of text to be generated, a context that provides first information dependent upon the topical information, and the at least one example first chunk of text;
determining a second chunk of the multiple chunks of text to generate dependent upon the topical information;
retrieving, from the index, at least one example second chunk of text with the at least one example second chunk of text being dependent upon a desired purpose of the second chunk and upon the topical information;
generating the second chunk of text by the first large language model via a second request that includes a prompt that states the desired purpose of the second chunk of text to be generated, a context that provides second information dependent upon the topical information, the at least one example second chunk of text, and the first chunk of text with the second chunk of text being dependent upon the first chunk of text previously generated by the first large language model; and
assembling the first chunk of text and the second chunk of text to form at least a portion of the document such that the first chunk and the second chunk are consistent in content.

Co-pending Application 18/657040, claim 1:
1. A method of generating a document having multiple chunks of text that collectively form at least a portion of the document, the method comprising:
receiving topical information relevant to the document to be created;
dependent upon the topical information, determining a first chunk of text to generate;
retrieving, from an index, at least one example first chunk of text;
providing the at least one example first chunk of text and at least a portion of the topical information to a first large language model;
prompting, by a computer processor, the first large language model to generate the first chunk of text through the use of a first request that includes a prompt that states a desired purpose of the first chunk of text to be generated, a context that provides information dependent upon the topical information, and the at least one example first chunk of text; and
generating, by the first large language model, the first chunk of text dependent upon the topical information and accomplishing the desired purpose set out in the prompt.

Claim 2 (identical in both applications):
2. The method of claim 1, wherein the topical information includes at least one of the following: a project name, a project identification number, a client name, a client industry, a client description, a document type, at least one challenge of the project, a project duration, at least one priority of the project, at least one special consideration, at least one service type, a delivery type, and a delivery location.

Claim 3 (identical in both applications):
3. The method of claim 1, wherein the document is a contract.

Claim 4 (identical in both applications):
4. The method of claim 3, wherein the contract is a statement of work.

Claim 5 (identical in both applications):
5. The method of claim 4, wherein the statement of work is for development of a software program for a client.

Claim 6 (identical in both applications):
6. The method of claim 5, wherein the desired purpose of the first chunk for the statement of work is at least one of the following: a project scope, a project summary, an executive summary, client responsibilities, a project description, deliverables, assumptions, a project duration, a service description, and party roles.

Instant claim 9 and co-pending claim 14 (identical bodies):
The method of claim 1, further comprising:
evaluating the first chunk of text for a hallucination as generated by the first large language model.

Instant Application 18/657022, claim 13:
13. The method of claim 9, wherein the second large language model is different from the first large language model.

Co-pending Application 18/657040, claim 16:
16. The method of claim 15, wherein the second large language model is different from the first large language model.
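Examiner's note (for illustration only): the workflow recited in claim 1 of the instant application may be sketched in Python as follows. This is a minimal, hypothetical sketch; every function, class, and variable name is invented for illustration and is not drawn from applicant's disclosure or from the cited references.

from dataclasses import dataclass

@dataclass
class ChunkSpec:
    purpose: str  # desired purpose of the chunk (e.g., "executive summary")
    context: str  # information dependent upon the topical information

def retrieve_example(index: dict, purpose: str, topic: str) -> str:
    # Retrieve, from an index, an example chunk keyed by the chunk's
    # desired purpose and the topical information (hypothetical index).
    return index.get((purpose, topic), "")

def generate_chunk(llm, spec: ChunkSpec, example: str, prior: str = "") -> str:
    # Build a request whose prompt states the desired purpose, provides
    # topical context and an example chunk, and, for chunks after the first,
    # includes the previously generated chunk so the chunks stay consistent.
    prompt = (f"Purpose: {spec.purpose}\nContext: {spec.context}\n"
              f"Example: {example}\n")
    if prior:
        prompt += f"Previously generated text: {prior}\n"
    return llm(prompt)  # hypothetical call to a first large language model

def generate_document(llm, index: dict, topic: str, specs: list) -> str:
    chunks = []
    for spec in specs:
        example = retrieve_example(index, spec.purpose, topic)
        prior = chunks[-1] if chunks else ""
        chunks.append(generate_chunk(llm, spec, example, prior))
    # Assemble the chunks to form at least a portion of the document.
    return "\n\n".join(chunks)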
Claim 1 of the reference co-pending application recites all of the limitations of claim 1 of the instant application as cited in the table above except the following portion: “determining a second chunk of the multiple chunks of text to generate dependent upon the topical information; retrieving, from the index, at least one example second chunk of text with the at least one example second chunk of text being dependent upon a desired purpose of the second chunk and upon the topical information; generating the second chunk of text by the first large language model via a second request that includes a prompt that states the desired purpose of the second chunk of text to be generated, a context that provides second information dependent upon the topical information, the at least one example second chunk of text, and the first chunk of text with the second chunk of text being dependent upon the first chunk of text previously generated by the first large language model; and assembling the first chunk of text and the second chunk of text to form at least a portion of the document such that the first chunk and the second chunk are consistent in content.”
Arunachalam teaches determining a second chunk of the multiple chunks of text to generate dependent upon the topical information (“If information request router 504 determines that the information request is a request for a summary, information request router 504 passes the information request to chunk retrieving module 510. Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary.” Paragraph 0071, “In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038);
retrieving, from the index, at least one example second chunk of text with the at least one example second chunk of text being dependent upon a desired purpose of the second chunk and upon the topical information (“In some instances, LLM 110C may summarize the subset of chunks over several iterations to refine the summary. For example, LLM 110C may receive a first chunk in the subset of chunks to generate a summary. Next, LLM 110C may receive a second chunk” paragraph 0071);
generating the second chunk of text by the first large language model via a second request that includes (“Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary….Next, LLM 110C may receive a second chunk and a summary generated from the first chunk to generate a summary.” Paragraph 0071), the at least one example second chunk of text, and the first chunk of text with the second chunk of text being dependent upon the first chunk of text previously generated by the first large language model (“In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038); and
assembling the first chunk of text and the second chunk of text to form at least a portion of the document such that the first chunk and the second chunk are consistent in content (“Next, LLM 110C may receive…the summary generated using the first and second chunks. The process may continue until LLM 110C uses all chunks in the subset of chunks or until LLM 110C determines that the content of the summary is no longer being modified.” Paragraph 0071).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the claims of the co-pending application to incorporate the teachings of Arunachalam. One would have been motivated to make such a combination to provide relevant responses.
Crouch teaches generating the chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the chunk of text to be generated (“In step 508, the computing device may generate a search query from the received prompt for passage search. The computing device may normalize all received prompts into a search query composed of logical operators, search keywords, concept identifiers in a format that is used to conduct passage searches by the QA system.” paragraphs 0076, 0079).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further incorporate the teachings of Crouch. One would have been motivated to make such a combination to provide relevant responses.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Arunachalam et al. (US 20250211549 A1, hereinafter Arunachalam) in view of Crouch et al. (US 20160078102 A1, hereinafter Crouch).
As to independent claim 1, Arunachalam teaches a method of generating a document having multiple chunks of text that collectively form at least a portion of the document (“The embodiments are directed to a generative artificial intelligence (AI) system for generating answers to questions or generating summaries from data included in various data sources and in multiple domains.” Abstract), the method comprising:
determining a first chunk of the multiple chunks of text to generate dependent upon topical information relevant to the document that is to be created (“Once trained, LLM 110A may receive the dialogue or information request and may classify the information request as a question/answer request, a summary request, or the like.” Paragraph 0064 last sentence, “In some embodiments, LLM 110A may also identify a project identifier for a project from the dialogue or from the information request.” Paragraph 0065, “If information request router 504 determines that the information request is a request for a summary, information request router 504 passes the information request to chunk retrieving module 510.” paragraph 0071);
retrieving, from an index, at least one example first chunk of text with the at least one example first chunk of text being dependent upon a desired purpose of the first chunk and upon the topical information (“Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary.” Paragraph 0071);
generating the first chunk of text by a first large language model via a first request that includes (“Once chunk retrieving module 510 identifies a subset of chunks from chunks 404A that may contribute to the summary, chunk retrieving module 510 may forward the chunks to LLM 110C. LLM 110C may receive and summarize the subset of chunks into a summary. In some instances, LLM 110C may summarize the subset of chunks over several iterations to refine the summary. For example, LLM 110C may receive a first chunk in the subset of chunks to generate a summary.” Paragraph 0071);
determining a second chunk of the multiple chunks of text to generate dependent upon the topical information (“If information request router 504 determines that the information request is a request for a summary, information request router 504 passes the information request to chunk retrieving module 510. Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary.” Paragraph 0071, “In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038);
retrieving, from the index, at least one example second chunk of text with the at least one example second chunk of text being dependent upon a desired purpose of the second chunk and upon the topical information (“In some instances, LLM 110C may summarize the subset of chunks over several iterations to refine the summary. For example, LLM 110C may receive a first chunk in the subset of chunks to generate a summary. Next, LLM 110C may receive a second chunk” paragraph 0071);
generating the second chunk of text by the first large language model via a second request that includes (“Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary….Next, LLM 110C may receive a second chunk and a summary generated from the first chunk to generate a summary.” Paragraph 0071), the at least one example second chunk of text, and the first chunk of text with the second chunk of text being dependent upon the first chunk of text previously generated by the first large language model (“In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038); and
assembling the first chunk of text and the second chunk of text to form at least a portion of the document such that the first chunk and the second chunk are consistent in content (“Next, LLM 110C may receive…the summary generated using the first and second chunks. The process may continue until LLM 110C uses all chunks in the subset of chunks or until LLM 110C determines that the content of the summary is no longer being modified.” Paragraph 0071).
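Examiner's note (for illustration only): the iterative chunk summarization that Arunachalam describes in the passages quoted above from paragraph 0071 may be sketched as follows; the names are hypothetical, and the sketch is not asserted to be Arunachalam's actual implementation.

def summarize_chunks(llm, chunks: list) -> str:
    # Per the quoted passage: the LLM receives each chunk together with the
    # running summary, and the process continues until the content of the
    # summary is no longer being modified.
    summary = ""
    for chunk in chunks:
        new_summary = llm(f"Summary so far: {summary}\nNext chunk: {chunk}\n"
                          "Update the summary.")
        if new_summary == summary:  # content no longer being modified
            break
        summary = new_summary
    return summary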
Arunachalam does not appear to expressly teach generating the first chunk of text and the second chunk of text by a first large language model via a request that includes a prompt that states the desired purpose of the respective chunk of text to be generated.
Crouch teaches generating the chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the chunk of text to be generated (“In step 508, the computing device may generate a search query from the received prompt for passage search. The computing device may normalize all received prompts into a search query composed of logical operators, search keywords, concept identifiers in a format that is used to conduct passage searches by the QA system.” paragraphs 0076, 0079).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Arunachalam to comprise generating the first chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the first chunk of text to be generated. One would have been motivated to make such a combination to provide relevant responses.
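Examiner's note (for illustration only): the prompt-to-query normalization that Crouch describes in paragraphs 0076 and 0079 may be sketched as follows; the stopword list and function name are hypothetical and are not drawn from Crouch.

STOPWORDS = {"a", "an", "the", "of", "for", "to", "and"}

def normalize_prompt(prompt: str) -> str:
    # Reduce a received prompt to search keywords joined by logical
    # operators, in the spirit of the quoted passage.
    keywords = [w.lower() for w in prompt.split()
                if w.lower() not in STOPWORDS]
    return " AND ".join(keywords)

# For example, normalize_prompt("a statement of work for the client")
# returns "statement AND work AND client".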
As to dependent claim 2, Arunachalam teaches the method of claim 1, Arunachalam further teaches wherein the topical information includes at least one of the following: a project name, a project identification number, a client name, a client industry, a client description, a document type, at least one challenge of the project, a project duration, at least one priority of the project, at least one special consideration, at least one service type, a delivery type, and a delivery location. (“In one embodiment, data processing engine 114 may generate a project identifier for the project data 202A associated with a domain that is a project.” Paragraph 0078).
As to dependent claim 3, Arunachalam teaches the method of claim 1, wherein the document is a contract (“The requirements stage may generate documents pertaining to functional requirements, technical requirements, review and approval documentation, and/or a statement of work.” Paragraph 0027).
As to dependent claim 4, Arunachalam teaches the method of claim 3, Arunachalam further teaches wherein the contract is a statement of work (“The requirements stage may generate documents pertaining to functional requirements, technical requirements, review and approval documentation, and/or a statement of work.” Paragraph 0027).
As to dependent claim 5, Arunachalam teaches the method of claim 4, Arunachalam further teaches wherein the statement of work is for development of a software program for a client (“The generative AI system may be particularly useful in summarizing projects and generating answers to questions associated with projects. Projects, such as software projects,” paragraph 0019).
As to dependent claim 6, Arunachalam teaches the method of claim 5, Arunachalam further teaches wherein the desired purpose of the first chunk for the statement of work is at least one of the following: a project scope, a project summary, an executive summary, client responsibilities, a project description, deliverables, assumptions, a project duration, a service description, and party roles (“The dialogue may include a project identifier and a request for information associated with the project. At operation 806, a determination that the information request is a request for a summary is made. For example, information request router 504 may use LLM 110A to identify the project identifier and classify the information request as a request for the summary from the dialogue.” Paragraph 0094-0095).
As to dependent claim 7, Arunachalam teaches the method of claim 1, Arunachalam further teaches the method comprising:
determining a third chunk of the multiple chunks of text to generate dependent upon the topical information (“If information request router 504 determines that the information request is a request for a summary, information request router 504 passes the information request to chunk retrieving module 510. Chunk retrieving module 510 may parse the metadata tags in dictionary 410A associated with project data 202A to determine tags that indicate that the corresponding chunk may include information that contributes to the summary.” Paragraph 0071, “In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038);
retrieving, from the index, at least one example third chunk of text with the at least one example third chunk of text being dependent upon a desired purpose of the third chunk (“In some instances, LLM 110C may summarize the subset of chunks over several iterations to refine the summary [….] Next, LLM 110C may receive a third chunk” paragraph 0071); and
generating the third chunk of text by the first large language model via a third request that includes (“Next, LLM 110C may receive a third chunk and the summary generated using the first and second chunks. The process may continue until LLM 110C uses all chunks in the subset of chunks or until LLM 110C determines that the content of the summary is no longer being modified.” Paragraph 0071, “In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038).
Arunachalam does not appear to expressly teach generating the third chunk of text by a first large language model via a request that includes a prompt that states the desired purpose of the third chunk of text to be generated.
Crouch teaches generating the chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the chunk of text to be generated (“In step 508, the computing device may generate a search query from the received prompt for passage search. The computing device may normalize all received prompts into a search query composed of logical operators, search keywords, concept identifiers in a format that is used to conduct passage searches by the QA system.” paragraphs 0076, 0079).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the method of Arunachalam to comprise generating the third chunk of text by a first large language model via a first request that includes a prompt that states the desired purpose of the third chunk of text to be generated. One would have been motivated to make such a combination to provide relevant responses to the requests.
As to dependent claim 8, Arunachalam teaches the method of claim 7, Arunachalam teaches the method further comprising: adding the third chunk of text to the document that includes the first chunk of text and the second chunk of text (“LLM 110C may receive a third chunk and the summary generated using the first and second chunks. Once LLM 110C generates a summary, generative AI system 108 may transmit the summary” paragraph 0071-0072).
As to dependent claim 9, Arunachalam teaches the method of claim 1, Arunachalam teaches the method further comprising: evaluating the first chunk of text for a hallucination as generated by the first large language model (“In some embodiments, generative AI system 108 may reduce or eliminate a number of AI hallucinations from the response. An AI hallucination may occur when one of LLMs 110 may create an answer or a summary that is not based on chunks 404.” Paragraph 0076).
As to dependent claim 10, Arunachalam teaches the method of claim 9, Arunachalam further teaches wherein the evaluation of the first chunk of text is performed before the generation of the second chunk of text (“LLM 110C may summarize the subset of chunks over several iterations to refine the summary. For example, LLM 110C may receive a first chunk in the subset of chunks to generate a summary. Next, LLM 110C may receive a second chunk and a summary generated from the first chunk to generate a summary.” Paragraph 0071, “An AI hallucination may occur when one of LLMs 110 may create an answer or a summary that is not based on chunks 404.” Paragraph 0076).
As to dependent claim 11, Arunachalam teaches the method of claim 9, Arunachalam further teaches the method comprising: evaluating the second chunk of text for a hallucination as generated by the first large language model (“LLM 110C may summarize the subset of chunks over several iterations to refine the summary. For example, LLM 110C may receive a first chunk in the subset of chunks to generate a summary. Next, LLM 110C may receive a second chunk and a summary generated from the first chunk to generate a summary.” Paragraph 0071, “An AI hallucination may occur when one of LLMs 110 may create an answer or a summary that is not based on chunks 404.” Paragraph 0076).
As to dependent claim 12, Arunachalam teaches the method of claim 11, Arunachalam further teaches wherein the evaluation of the first chunk and the evaluation of the second chunk are performed concurrently (“Once chunk retrieving module 510 identifies a subset of chunks from chunks 404A that may contribute to the summary, chunk retrieving module 510 may forward the chunks to LLM 110C. LLM 110C may receive and summarize the subset of chunks into a summary.” Paragraph 0071, “An AI hallucination may occur when one of LLMs 110 may create an answer or a summary that is not based on chunks 404.” Paragraph 0076).
As to dependent claim 13, Arunachalam teaches the method of claim 9, Arunachalam further teaches wherein the evaluation is performed by a second large language model that is different from the first large language model (“An AI hallucination may occur when one of LLMs 110 may create an answer or a summary that is not based on chunks 404.” Paragraph 0076, “LLMs 208A, 208B, and 208C include respective project data 202A, 202B, and 202C, which reduces the likelihood of LLMs 208A-C hallucinating, or the likelihood of project data 202A-202C being intermingled with data from other projects when LLMs 208A-C generate a response to a question during the inference stage discussed in FIG. 3.” Paragraph 0045).
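Examiner's note (for illustration only): an evaluation of a generated chunk for hallucination by a second, different large language model, as recited in claims 9 and 13, might be sketched as follows; the verifier prompt and all names are hypothetical and are not drawn from the cited references.

def is_hallucination(verifier_llm, generated: str, sources: list) -> bool:
    # A second large language model, different from the generator, checks
    # whether the generated chunk is supported by the source chunks.
    prompt = ("Answer YES or NO: is the following text unsupported by the "
              f"sources?\nText: {generated}\nSources: {' '.join(sources)}")
    return verifier_llm(prompt).strip().upper().startswith("YES")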
As to dependent claim 14, Arunachalam teaches the method of claim 1, Arunachalam further teaches wherein the steps of determining the first chunk of text to generate and determining the second chunk of text to generate are performed by a computer processor (“the computer system 900 performs specific operations by the processor 904 executing one or more sequences of instructions contained in the memory component 906,” paragraphs 0102, 0071).
As to dependent claim 15, Arunachalam teaches the method of claim 14, Arunachalam further teaches wherein the computer processor determines the first chunk of text to generate and the second chunk of text to generate based on instructions dependent on the document that is to be generated (“In some instances, data processing engine 114 may assign metadata tags to chunks. Typically, there may be one metadata tag for each chunk. The metadata tag may include a project identifier, a title of the document, a subtitle of the document, a hierarchy of the chunk as compared to other chunks in the document or in the section of the document, a hierarchy of the chunk in the project, etc.” paragraph 0038).
As to dependent claim 16, Arunachalam teaches the method of claim 1, Arunachalam does not appear to teach wherein the retrieval of the at least one example first chunk of text from the index further comprises:
formulating a query that depends upon the desired purpose of the first chunk of text to be generated and upon the topical information;
providing the query to a search engine in communication with the index; and determining the at least one example first chunk of text from multiple example chunks of text in the index.
Crouch teaches wherein the retrieval of the at least one example first chunk of text from the index further comprises:
formulating a query that depends upon the desired purpose of the first chunk of text to be generated and upon the topical information (“In step 508, the computing device may generate a search query from the received prompt for passage search. The computing device may normalize all received prompts into a search query composed of logical operators, search keywords, concept identifiers in a format that is used to conduct passage searches by the QA system.” paragraph 0076);
providing the query to a search engine in communication with the index; and determining the at least one example first chunk of text from multiple example chunks of text in the index (“In step 510, the computing device may analyze the annotated passage index using a search query keyword to identify a passage from at least one document. The computing device, using a passage analyzer, may search through the one or more passage index entries for all the documents and/or web pages identified for the passage search.” Paragraph 0077).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the method of Arunachalam to comprise formulating a query that depends upon the desired purpose of the first chunk of text to be generated and upon the topical information; providing the query to a search engine in communication with the index; and determining the at least one example first chunk of text from multiple example chunks of text in the index. One would have been motivated to make such a combination to provide relevant responses.
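Examiner's note (for illustration only): the retrieval steps of claim 16 (formulating a query from the desired purpose and the topical information, providing it to a search engine in communication with the index, and determining one example chunk from multiple example chunks) may be sketched as follows; the in-memory "search engine" and all names are hypothetical.

def formulate_query(purpose: str, topical_info: str) -> str:
    # Formulate a query that depends upon the chunk's desired purpose
    # and upon the topical information.
    return f"{purpose} {topical_info}"

def search_index(index: list, query: str) -> str:
    # Stand-in for a search engine in communication with the index:
    # rank indexed example chunks by naive keyword overlap with the query.
    terms = set(query.lower().split())
    return max(index, key=lambda chunk: len(terms & set(chunk.lower().split())))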
As to dependent claim 17, Arunachalam teaches the method of claim 16, Arunachalam does not appear to expressly teach wherein the query is formulated by a query module in communication with the search engine.
Crouch teaches wherein the query is formulated by a query module in communication with the search engine (“The computing device may parse the prompt to generate the search query used to identify the at least one passage.” paragraph 0012).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the method of Arunachalam to comprise wherein the query is formulated by a query module in communication with the search engine. One would have been motivated to make such a combination to provide relevant responses.
As to dependent claim 18, Arunachalam teaches the method of claim 1, Arunachalam further teaches wherein the assembly of the first chunk of text and the second chunk of text to form at least a portion of the document is performed by an assembler module (“Next, LLM 110C may receive a second chunk and a summary generated from the first chunk to generate a summary. Next, LLM 110C may receive a third chunk and the summary generated using the first and second chunks. The process may continue until LLM 110C uses all chunks in the subset of chunks or until LLM 110C determines that the content of the summary is no longer being modified.” Paragraph 0071).
As to dependent claim 19, Arunachalam teaches the method of claim 1, Arunachalam further teaches wherein the document is saved in a storage media (Fig. 9, computer readable medium, such as the static storage component 908 or the disk drive component 910 for storing document/data).
As to dependent claim 20, Arunachalam teaches the method of claim 1, Arunachalam teaches the method further comprising: communicating the document having the first chunk of text and the second chunk of text to a user (“The finetuned LLM 208A may generate a response, which may include an answer to the question or a summary, and transmit the response back to generative AI chatbot interface 116, which may then display the response to the user.” Paragraph 0047, last sentence).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Sinha et al. US 20250156639 teaches systems and methods for analyzing and managing documents, such as contracts, agreements, forms, and related items.
Abraham US 20240428005 A1 teaches methods and systems for automatically generating documents for a specific topic using large language models.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHELET SHIBEROU whose telephone number is (571)270-7493. The examiner can normally be reached Monday-Friday 9:00 AM-5:00 PM Eastern Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHELET SHIBEROU/Primary Examiner, Art Unit 2171