Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following rejections are withdrawn in view of new grounds of rejection as necessitated by applicant’s amendment:
Claims 1-4, 6-12, 15-18, and 20, rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US Application: US 20240177084, published: May 30, 2024, filed: Sep. 29, 2023) in view of Austin et al (US Application: US 2024/0420012 A1, published: Dec. 19, 2024, filed: Jun. 14, 2023).
Claims 23 and 24, rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US Application: US 20240177084, published: May 30, 2024, filed: Sep. 29, 2023) in view of Austin et al (US Application: US 2024/0420012 A1, published: Dec. 19, 2024, filed: Jun. 14, 2023) and further in view of Gottlob et al (US Application: US 2025/0045256, published: Feb. 6, 2025, filed: Aug. 2, 2024, EEFD: Aug. 4, 2023).
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/18/2025 is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-10, 12-17, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US Application: US 20240177084, published: May 30, 2024, filed: Sep. 29, 2023) in view of DeFoor et al (US Patent: 11972223, issued: Apr. 30, 2024, filed: Jul. 31, 2023).
With regards to claim 1, Liu et al teaches a computer-implemented method (Fig. 13, paragraph 0094: a computing device is implemented using a memory and a processor which executes programs) comprising:
receiving, at a server system, a free-form query (paragraphs 0074 and 0094: a question is received from a user);
identifying history records from a database associated with the free-form query (paragraphs 0065, 0074: two-way communications between at least two parties are selected from a transcript); and
aggregating the language responses (paragraph 0078: each piece of information (language response) corresponding to each chunk is then aggregated/merged into a single chunk); and
processing the aggregated language responses using a [particular] large language model to generate a summary of the history records (paragraph 0079: the language responses, as the merged chunk, are sent to a particular summarizer LLM to generate an answer/summary).
However, Liu et al does not expressly teach … wherein the free-form query includes a plurality of facets; … determining a length of the history records; in response to determining that the history records exceed a predetermined length, generating one or more intermediate summaries of a subset of the plurality of facets for the history records and generating, from the free-form query, a plurality of facet-specific query prompts using the one or more intermediate summaries; in response to determining the history records are below the predetermined length, generating, from the free-form query, the plurality of facet-specific query prompts; providing the plurality of facet-specific query prompts and the history records to one or more large language models; receiving, from the one or more language models, language responses for each of the plurality of facet-specific query prompts; and … generate a summary of the language responses.
Yet DeFoor et al teaches … wherein the free-form query includes a plurality of facets (column 10, lines 32-39: a natural language query is received and the query includes natural language text and one or more instructions in a query language); … determining a length of the history records (column 13, lines 35-45: a text size is determined and assessed against a chunk size for the stored text (interpreted as ‘history record(s)’)); in response to determining that the history records exceed a predetermined length (column 13, lines 45-56: a determination is made identifying that the text exceeds a predetermined chunk size/length), generating one or more intermediate summaries of a subset of the plurality of facets for the history records and generating, from the free-form query, a plurality of facet-specific query prompts using the one or more intermediate summaries (column 22, lines 4-12: each chunk can be considered an intermediate summary, and prompts obtained using template(s) are generated by filling the templates with chunk text (see “A document relevance prompt is determined for the selected chunk at 912 based on the document relevance prompt template. In some implementations, determining the document relevance prompt may involve adding the chunk text to the document relevance prompt template in the fillable document portion. In addition, all or a portion of the query, topic, or question may be added to the document relevance prompt template in a fillable query portion”));
in response to determining the history records are below the predetermined length, generating, from the free-form query, the plurality of facet-specific query prompts (column 14, lines 4-15: “An updated text portion that does not exceed the maximum text chunk size is identified”. Also in column 25, lines 40-50: “The document chunks and the questions to answer are used to generate document review prompts at 1210 through 1212. A document review prompt may be created by combining a document chunk with one or more questions and a document review prompt template. The document review prompt template may have one or more fillable portions that may be filled with text determined based on the document chunk and questions. The document review prompt may include one or more instructions to a large language model”); providing the plurality of facet-specific query prompts and the history records to one or more large language models (column 25, lines 40-50, 60-67: a question has different parameters for task(s), and the task(s) have different parameter(s)/instruction(s) (claimed facet(s)). “The document chunks and the questions to answer are used to generate document review prompts at 1210 through 1212. A document review prompt may be created by combining a document chunk with one or more questions and a document review prompt template. The document review prompt template may have one or more fillable portions that may be filled with text determined based on the document chunk and questions. The document review prompt may include one or more instructions to a large language model”); receiving, from the one or more language models, language responses for each of the plurality of facet-specific query prompts (column 27, lines 56-35: “prompts are sent to the text generation modeling system 270 via one or more API calls at 1216 through 1218, where they are individually analyzed and completed by the text generation model 276.
At 1218 through 1220, the text generation modeling system 270 sends one or more document review response messages to the text generation interface system 210”); and … generate a summary of the language responses (column 27, lines 50-63: a summary of language responses is generated through consolidation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu et al’s ability to process an aggregate of obtained language responses using a summarizing-type LLM (interpreted as the claimed ‘second’ large language model) based upon a provided query and historical data, such that the length of the historical/saved data is determined before prompts are generated and provided to a language model for summary generation associated with facets of the provided query, and such that the summary generation is based on applying the generated prompts to one or more language models, as taught by DeFoor et al. The combination would have allowed Liu et al to have efficiently processed input documents using large language models.
With regards to claim 4, the computer-implemented method of claim 1, Liu et al teaches further comprising:
selecting an additional number of history records from a database associated with the free-form query to identify second selected history records (paragraphs 0075 and 0080, Fig. 12: based on the query/question, when there is not just a single transcript of records but multiple transcripts of records, other transcripts (such as a second transcript among a plurality of transcripts) will be accessed to obtain/select from them their history record data);
generating additional individual summaries of individual history records of the second selected history records by:
processing an additional set of corresponding individual history records using the chunking algorithm (Fig. 12: each transcript, such as a second transcript, is processed to produce a summarized output (language response). The summarization for each transcript is detailed in Fig. 11 (as also explained in the rejection of claim 1 above), which includes chunking the transcript currently undergoing processing (as explained above in paragraph 0074 of Liu et al));
constructing additional language responses from second outputs of the chunking algorithm using the one or more large language models (Fig. 12, paragraph 0075 of Liu et al: the processing of each transcript (such as a second transcript) results in language responses (such as second language responses for a second transcript), where an LLM is used to process each chunk to identify information (language responses) for each chunk deemed relevant to the question);
aggregating the additional language responses (as explained above in paragraph 0078 of Liu et al: each piece of information (language response) corresponding to each chunk is then aggregated/merged into a single chunk); and
processing the additional language responses using the one or more large language models to generate an additional individual summary for additional corresponding individual history records (paragraph 0079 of Liu et al: the language responses, as the merged chunk, are sent to an LLM to generate an answer/summary); and
generating, using the one or more large language models, an aggregate summary using the individual summary and the additional individual summary (Fig. 12: the first individual summary (corresponding to a first transcript) and the summary for an additional transcript (second transcript) are combined (ref 1206) and then sent to the LLM to generate an aggregate summary (such as a short summary or a long summary in ref 1212 and ref 1214, respectively)).
The examiner further notes that, as explained in the rejection of claim 1, Liu et al’s summarization of aggregated response data was modified/combined with the teachings of DeFoor et al, such that the response data is sourced from one or more LLMs based on one or more historical records (interpreted to also encompass ‘second additional’ history records) to produce the aggregate summary(ies).
With regards to claim 5, the computer-implemented method of claim 1, Liu et al teaches further comprising:
generating a plurality of individual summaries for sets of history records numbering less than or equal to a threshold number (paragraph 0074: any number of meeting/communication records having a size less than or equal to the maximum token size for an LLM will result in generating summaries (and if there are a plurality of transcripts, then a plurality of summaries are generated, as shown in Fig. 12, ref 1204));
processing the plurality of individual summaries using an additional large language model to generate an aggregated summary for data of the sets of history records (Fig. 12, refs 1212 and 1214: a second instance of an LLM can be used to generate a different version of the aggregated summary (such as a longer summary being generated by an instance of an LLM different than the earlier instance referenced in ref 1212 for a short summary)).
With regards to claim 6, the computer-implemented method of claim 1, Liu et al teaches further comprising:
generating a summary query (Fig. 12, ref 1204: a request for each transcript is dispatched to an LLM) and an aggregation query from the free-form query (Fig. 12: a step to initiate combination of summarized transcripts leads to step ref 1206);
generating a plurality of individual summaries for sets of history records including the individual summaries using the summary query (Fig. 12, ref 1206: based upon the queries to initiate a summarization of each transcript, the summaries are aggregated); and processing the plurality of individual summaries using the aggregation query and an additional large language model (Fig. 12: using the plurality of individual summaries in aggregated/combined form, processing them using another/second instance of the LLM that is specialized to generate either a short or a long summary).
The examiner further notes that, as explained in the rejection of claim 1, Liu et al’s summarization of aggregated response data was modified/combined with the teachings of DeFoor et al, such that the response data is sourced from one or more LLMs based on one or more historical records (interpreted to also encompass ‘second additional’ history records) to produce the aggregate summary(ies).
With regards to claim 7, the computer-implemented method of claim 1, Liu et al teaches further comprising: determining whether a number of history records is greater than a threshold number (paragraph 0074: the communications data between participants (any number) is checked against a threshold number of tokens because an LLM is limited to processing a specific number of tokens (the threshold size amount)); and facilitating presentation of the summary as a response to the free-form query when the number of history records is not greater than the threshold number (paragraph 0074: when the history communication data is within (interpreted as less than) the maximum size limit that an LLM can handle, the LLM can process the chunk to proceed to generating/presenting a summary/answer to the query (paragraph 0079)).
With regards to claim 8, the computer-implemented method of claim 1, Liu et al teaches further comprising:
determining whether a number of history records is greater than a threshold number, wherein when the number is greater than the threshold number, generating a response to the free-form query by: dividing the number of history records into sets of records including fewer than the threshold number of records (paragraph 0074: an LLM can handle up to a specific number of tokens of conversation/meeting history content, and if the number is more than the threshold, the chunk is further divided into smaller chunks such that the LLM can handle the smaller chunks of conversation/meeting record/content);
generating, by the one or more large language models, individual summaries for the sets of records (Fig. 12, if the chunks come from multiple transcripts, then individual summaries from the multiple transcripts are generated (ref 1204)); and generating, by the one or more large language models, the response as an aggregated summary using the individual summaries (Fig. 12, ref 1206 and also ref 1212 and 1214: the summaries are aggregated/combined and can be further summarized in short or long summary form).
The examiner further notes that, as explained in the rejection of claim 1, Liu et al’s summarization of aggregated response data was modified/combined with the teachings of DeFoor et al, such that the response data is sourced from one or more LLMs based on one or more historical records (interpreted to also encompass ‘second additional’ history records) to produce the aggregate summary(ies).
With regards to claim 9, Liu et al and DeFoor et al teach a system comprising: a memory; and one or more processors coupled to the memory and configured to perform operations comprising: receiving, at a server system, a free-form query, wherein the free-form query includes a plurality of facets; identifying history records from a database associated with the free-form query to identify selected history records; determining a length of the history records; in response to determining that the history records exceed a predetermined length, generating one or more intermediate summaries of a subset of the plurality of facets for the history records and generating, from the free-form query, a plurality of facet-specific query prompts using the one or more intermediate summaries; in response to determining the history records are below the predetermined length, generating, from the free-form query, the plurality of facet-specific query prompts; providing the plurality of facet-specific query prompts and the history records to one or more large language models; receiving, from the one or more large language models, language responses for each of the plurality of facet-specific query prompts; and aggregating the language responses using a second large language model to generate a summary of the language responses, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 10, the system of claim 9, Liu et al teaches wherein the history records are selected from the database either randomly or using a search algorithm (paragraph 0074: the transcript having the history of meeting records is obtained/searched based upon its association with a specific knowledge transfer plan).
With regards to claim 12, the system of claim 9, Liu et al and DeFoor et al teach wherein the one or more processors are further configured to perform operations comprising: selecting an additional number of history records from a database associated with the free-form query to identify additional selected history records; and generating additional individual summaries of individual history records of the additional selected history records, as similarly explained in the rejection of claim 4, and is rejected under similar rationale.
With regards to claim 13, the system of claim 9, Liu et al and DeFoor et al teach wherein the one or more processors are further configured to perform operations comprising: generating a plurality of individual summaries for sets of history records numbering less than or equal to a threshold number; and processing the plurality of individual summaries using an additional large language model different than the one or more large language models to generate an aggregated summary for data of the sets of history records, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
With regards to claim 14, the system of claim 9, Liu et al and DeFoor et al teach wherein the one or more processors are further configured to perform operations comprising: generating a plurality of individual summaries for sets of history records numbering less than or equal to a threshold number; and processing the plurality of individual summaries using an additional large language model different than the one or more large language models to generate an aggregated summary for data of the sets of history records, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
With regards to claim 15, the system of claim 9, Liu et al and DeFoor et al teach wherein the one or more processors are further configured to perform operations comprising: generating a summary query and an aggregation query from the free-form query; generating a plurality of individual summaries for sets of history records including the individual summaries using the summary query; and processing the plurality of individual summaries using the aggregation query and an additional large language model, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
With regards to claim 16, the combination of Liu et al and DeFoor et al teaches a non-transitory computer-readable storage medium comprising instructions that, when executed by one or more processors of a computing system, cause the computing system to perform operations comprising: receiving, at a server system, a free-form query, wherein the free-form query includes a plurality of facets; identifying history records from a database associated with the free-form query to identify selected history records; determining a length of the history records; in response to determining that the history records exceed a predetermined length, generating one or more intermediate summaries of a subset of the plurality of facets for the history records and generating, from the free-form query, a plurality of facet-specific query prompts using the one or more intermediate summaries; in response to determining the history records are below the predetermined length, generating, from the free-form query, the plurality of facet-specific query prompts; providing the plurality of facet-specific query prompts and the history records to one or more large language models; receiving, from the one or more large language models, language responses for each of the plurality of facet-specific query prompts; and aggregating the language responses using a second large language model to generate a summary of the language responses, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 17, the non-transitory computer-readable storage medium of claim 16, Liu et al and DeFoor et al teach wherein the selected history records are selected from the database either randomly or using a search algorithm, as similarly explained in the rejection of claim 10, and is rejected under similar rationale.
With regards to claim 19, the non-transitory computer-readable storage medium of claim 16, Liu et al and DeFoor et al teach wherein, when executed by one or more processors of a computing system, the instructions cause the computing system to perform operations comprising: generating a plurality of individual summaries for sets of history records numbering less than or equal to a threshold number; and processing the plurality of individual summaries using an additional large language model different than the one or more large language models to generate an aggregated summary for data of the sets of history records, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
With regards to claim 20, the non-transitory computer-readable storage medium of claim 16, Liu et al and DeFoor et al teach wherein, when executed by one or more processors of a computing system, the instructions cause the computing system to perform operations comprising: generating a summary query and an aggregation query from the free-form query; generating a plurality of individual summaries for sets of history records including the individual summaries using the summary query; and processing the plurality of individual summaries using the aggregation query and an additional large language model, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
With regards to claim 21, which depends on claim 1, the combination of Liu et al and DeFoor et al teaches identifying the plurality of facets from [associated stored data], as similarly explained in the rejection of claim 1, and rejected under similar rationale.
However, although the combination teaches that facets could be identified from associated stored data, the combination does not expressly teach that the stored data comprises data of a two-way conversation including completed or resolved conversations.
Yet Liu et al teaches … it is known that stored data can comprise two-way conversation data including completed or resolved conversations (paragraph 0074: prior stored transcript data includes conversation/meeting data between two or more participants (and is interpreted as the claimed ‘completed’ since it has already happened/occurred)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu et al and DeFoor et al’s ability to identify a query as being associated with facets/categories of data, such that the types of categories would have included completed/occurred two-way conversation data, as also taught by Liu et al. The combination would have flexibly allowed additional conversation data to be recognized and used as a basis for response data.
With regards to claim 22, which depends on claim 1, the combination of Liu et al and DeFoor et al teaches processing the language responses … aggregating …, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination explained in the rejection of claim 1 does not specifically address processing the language responses using a chunking algorithm, … aggregating the chunked language responses.
Yet Liu et al teaches processing the language responses using a chunking algorithm, … aggregating the chunked language responses (paragraph 0078: each piece of information (language response) corresponding to each chunk is then aggregated/merged into a single chunk).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu et al and DeFoor et al’s ability to process the language responses from an aggregation, such that the responses could have been chunked, as also taught by Liu et al. The combination would have allowed a finer-grained approach for selectively identifying specific responses to designate for subsequent processing.
Claims 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US Application: US 20240177084, published: May 30, 2024, filed: Sep. 29, 2023) in view of DeFoor et al (US Patent: 11972223, issued: Apr. 30, 2024, filed: Jul. 31, 2023) and further in view of Gottlob et al (US Application: US 2025/0045256, published: Feb. 6, 2025, filed: Aug. 2, 2024, EEFD: Aug. 4, 2023).
With regards to claim 23, which depends on claim 1, the combination of Liu et al and DeFoor et al teaches one or more summaries … the second large language model, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach identifying, from one or more summaries, one or more conflicts, and generating, using … large language model, a summary resolution that resolves the conflicts.
Yet Gottlob et al teaches identifying, from one or more [results/answers], one or more conflicts, and generating, using … large language model, a summary resolution that resolves the conflicts (paragraphs 0085, 0180: answers are checked and conflicts can be resolved through filtering to produce remaining answer(s). Furthermore, the LLM is adaptive to the results (LLM behavior is updated)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu et al and DeFoor et al’s ability to generate one or more output summaries of LLM responses using a summarizing-output LLM (the second LLM), such that the outputs are further checked for conflicts and the conflicts are resolved through result filtering and updating of the output LLM accordingly (based on the results), as taught by Gottlob et al. The combination would have allowed Liu et al and DeFoor et al to have produced accurate factual information by identifying incorrect data (Gottlob et al, Abstract, paragraph 0028).
With regards to claim 24, which depends on claim 23, the combination of Liu et al, DeFoor et al and Gottlob et al teaches further comprising updating the second large language model based on the one or more conflicts, as similarly explained in the rejection of claim 23, and is rejected under similar rationale.
Response to Arguments
Applicant's arguments filed 12/18/2025 have been fully considered but they are not persuasive.
The applicant’s arguments with respect to the pending claims (claims 1, 4-10, 12-17, and 19-24) are directed to the prior art cited in the prior rejections in view of the newly amended claim language. In response, the examiner points out that the newly amended claim language necessitated new grounds of rejection, and new rejections are applied to address the pending claims. The examiner respectfully directs the applicant’s attention to the rejections above for a full explanation of how the pending claims are now rejected using at least a combination of Liu et al and DeFoor et al (and additionally Gottlob et al for pending claims 23 and 24).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571) 272-7596. The examiner can normally be reached Monday through Friday, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172