DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The response was received on 2/10/2026. Claims 1-9 and 11-20 are pending; claims 1-9 and 11-20 were previously presented, and claim 10 was cancelled.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 and 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to claim 1 (Step 2A, Prong One):
The claim recites the following limitations which are drawn towards an abstract idea:
A method comprising:
generating,
… an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections in the output (recites mental process steps of evaluating/calculating information about the summary to be used as criteria for making a determination/judgment; e.g. grading).
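For clarity of the record, under the broadest reasonable interpretation the recited computation reduces to a simple ratio that a person could evaluate mentally or with pen and paper; using the Examiner's own shorthand (the symbols below are illustrative, not claim language):

$$f(\text{output}) = \frac{N_{\text{technical words}}}{N_{\text{total words}}} \quad\text{or}\quad f(\text{output}) = \frac{N_{\text{technical sections}}}{N_{\text{total sections}}}$$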
As seen from above, the identified limitations recite concepts associated with an abstract idea; thus, the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.
Step 2A, Prong Two:
The following limitations have been identified as being additional elements as discussed below.
obtaining a source document; obtaining a user characteristic that indicates a complexity preference of a user (relates to insignificant extrasolution activity of mere data gathering and/or receiving information over a network, see MPEP 2106.05(g));
wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function … (recites mere instructions to apply the judicial exception in a computer using machine learning tools, see MPEP 2106.05(f)).
This judicial exception is not integrated into a practical application because the additional elements merely gather particular information and indicate that the judicial exception is performed with a generically recited machine learning element (i.e. language model).
As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)).
Step 2B:
Below is the analysis of the claims:
obtaining a source document; obtaining a user characteristic that indicates a complexity preference of a user (relates to well-understood, routine, and conventional activity of mere data gathering and/or receiving information over a network, see MPEP 2106.05(d));
As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely indicate generically that machine learning models are involved in performing the judicial exception and that particular information is received.
With regard to claim 2, this claim recites wherein: the complexity preference comprises a topic length preference or an expertise level of the user, which recites field of use limitations to describe the meaning of a particular variable (see MPEP 2106.05(h)) and adds no meaningful limitation beyond that of the abstract idea as discussed above.
With regard to claim 3, this claim recites generating a prompt for the language generation model based on the user characteristic, wherein the topic description is generated based on the prompt, which recites at a high level of generality a user interface element to allow users to interact with a computer to receive user input (see MPEP 2106.05(f)), which merely recites using a computer as a tool to implement the judicial exception and adds no meaningful limitation beyond that of the abstract idea as discussed above.
With regard to claim 4, this claim recites generating an output document based on the topic description, which recites creating a file, which is similar to insignificant extrasolution activity of writing/saving information to memory and is considered well-understood, routine, and conventional activity of writing/storing information in memory (see MPEP 2106.05(d)).
With regard to claim 5, this claim recites generating a prompt that includes instructions to generate the output document, which recites at a high level of generality a user interface element to allow users to interact with a computer to receive user input such as a file name (see MPEP 2106.05(f)), which merely recites using a computer as a tool to implement the judicial exception and adds no meaningful limitation beyond that of the abstract idea as discussed above.
With regard to claim 6, this claim recites wherein generating the output document comprises: generating a plurality of topics (which recites mental process steps of evaluation/judgment in determining topics/sub-topics in a document);
and clustering a plurality of sentences from the source document based on the plurality of topics, wherein the output document is based on the clustering (which recites insignificant extrasolution activity of sorting, which corresponds to well-understood, routine, and conventional activity of sorting, see MPEP 2106.05(d)) and adds no meaningful limitation beyond that of the abstract idea as discussed above.
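For context on why such clustering/sorting is considered well-understood, routine, and conventional, a minimal illustrative sketch using off-the-shelf tooling (assuming scikit-learn; this is the Examiner's illustration, not the applicant's disclosed method) follows:

```python
# Minimal sketch, assuming scikit-learn, of clustering sentences by topic;
# illustrative of conventional, off-the-shelf clustering only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sentences = [
    "Gradient descent updates model parameters.",
    "The optimizer minimizes the loss function.",
    "The conference will be held in Vienna.",
    "Attendees should register by June.",
]

vectors = TfidfVectorizer().fit_transform(sentences)  # sentence vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for sentence, label in zip(sentences, labels):
    print(label, sentence)  # sentences grouped by inferred topic
```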
With regard to claim 7, this claim recites wherein the objective function is based on a percentage of technical words in the topic description, a percentage of technical sections, and a number of a plurality of topic descriptions (recites field of use limitations and mental process steps involving a mathematical equation, with the claim describing the intended meaning of the variables that were used, see MPEP 2106.05(h)).
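Under the broadest reasonable interpretation, the claim 7 objective function could take, e.g., the following form, where the weighted combination and the symbols are the Examiner's illustrative assumptions rather than claim requirements:

$$f = \alpha\,\frac{N_{\text{technical words}}}{N_{\text{total words}}} + \beta\,\frac{N_{\text{technical sections}}}{N_{\text{total sections}}} + \gamma\,K,$$

where $K$ is the number of the plurality of topic descriptions.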
With regard to claim 8, this claim recites displaying the topic description to the user; and receiving feedback from the user based on the topic description which recites insignificant extrasolution activity of transmitting and receiving information over a network which corresponds to well-understood, routine, and conventional activity of transmitting and receiving information over a network (see MPEP 2106.05(d)) and adds no meaningful limitation beyond that of the abstract idea as discussed above.
With regard to claim 9 (Step 2A, Prong One):
The claim recites the following limitations which are drawn towards an abstract idea:
A method of training a machine learning model, the method comprising:
generating,
computing an objective function that computes a percentage of technical words over a total number of words in a portion of an output or a percentage of technical sections over a total number of sections in the output (recites mental process steps of performing mathematical calculation/determination to score the topic description/summary);
As seen from above, the identified limitations recite concepts associated with an abstract idea; thus, the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.
Step 2A, Prong Two:
The following limitations have been identified as being additional elements as discussed below.
obtaining a source document (relates to insignificant extrasolution activity of mere data gathering and/or receiving information over a network, see MPEP 2106.05(g));
This judicial exception is not integrated into a practical application because the additional elements merely gather particular information and indicate that a generically recited machine learning element (i.e. language model) can be trained (including updates).
As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)).
Step 2B:
Below is the analysis of the claims:
obtaining a source document (relates to well-understood, routine, and conventional activity of mere data gathering and/or receiving information over a network, see MPEP 2106.05(d));
As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely gather particular information and indicate that a generically recited machine learning element (i.e. language model) can be trained (including updates).
With regard to claim 11, this claim recites generating, using the language generation model, a plurality of topic descriptions based on the source document, wherein the objective function is based on a number of the plurality of topic descriptions, which recites the above identified abstract idea of evaluating/analyzing content to make a summary. The plurality of topic descriptions can relate to sub-topics; e.g., when discussing a complex topic, a human can break that topic/concept into smaller elements to discuss first, or, similar to summarizing a book, a human can summarize particular characters' plots or summarize the chapters/sections.
With regard to claim 12, this claim recites wherein updating the language generation model comprises: performing a reinforcement learning process based on the objective function, which recites technological environment limitations by reciting a particular machine learning process to use (see MPEP 2106.05(h)) at a high level of generality, which adds no meaningful limitation beyond that of the abstract idea as discussed above.
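To illustrate the level of generality at issue, a toy REINFORCE-style sketch in which the claimed objective (percentage of technical words) serves as the reward is provided below; the vocabulary list, candidate outputs, and update rule are the Examiner's hypothetical illustration, not the applicant's disclosed method:

```python
# Toy REINFORCE sketch: the technical-word-ratio objective is used as the
# reward for a two-candidate "policy". Hypothetical illustration only.
import math, random

TECHNICAL_VOCAB = {"entropy", "gradient", "tensor"}  # assumed jargon list

def technical_word_ratio(text):
    """Claimed-style objective: technical words / total words."""
    words = text.lower().split()
    return sum(w in TECHNICAL_VOCAB for w in words) / max(len(words), 1)

candidates = ["the gradient of the tensor", "a simple overview of the idea"]
logits = [0.0, 0.0]  # toy policy parameters over two candidate outputs
lr = 0.1

for _ in range(100):
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]
    i = random.choices(range(2), weights=probs)[0]  # sample an output
    reward = technical_word_ratio(candidates[i])    # objective as reward
    # REINFORCE: gradient of log p_i w.r.t. logit_j is (1[j == i] - p_j)
    for j in range(2):
        logits[j] += lr * reward * ((1.0 if j == i else 0.0) - probs[j])

print(candidates[logits.index(max(logits))])  # higher-ratio candidate wins
```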
With regard to claim 13, this claim recites obtaining a user characteristic that indicates a complexity preference of a user (which recites insignificant extrasolution activity and well-understood, routine, and conventional activity of receiving information over a network, see MPEP 2106.05(d)),
wherein the topic description is generated based on the complexity preference (which recites the above identified abstract idea of summarizing content).
With regard to claim 14, this claim recites clustering, using a clustering model, a plurality of sentences of the source document to obtain a plurality of clustered sentences (which recites insignificant extrasolution activity and well-understood, routine, and conventional activity of sorting information, see MPEP 2106.05(d));
receiving user feedback based on the plurality of clustered sentences; and updating parameters of the clustering model based on the user feedback (which recites generic machine learning training steps and amounts to merely implementing the abstract idea on a computer, see MPEP 2106.05(f)).
With regard to claim 15, this claim recites generating a description of an intent of the user feedback which recites the above identified abstract idea associated with mentally summarizing data.
With regard to claim 16, this claim recites receiving a modified description of the intent of the user feedback; computing a likelihood loss based on the modified description of the intent; and updating the parameters of the clustering model based on the likelihood loss (which recites generic machine learning training steps and amounts to merely implementing the abstract idea on a computer, see MPEP 2106.05(f)).
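For context, a generic likelihood loss of the kind recited is the standard negative log-likelihood; the notation below (assuming a model with parameters $\theta$) is the Examiner's illustration, not claim language:

$$\mathcal{L}(\theta) = -\log p_{\theta}(\tilde{d} \mid x),$$

where $\tilde{d}$ is the modified description of the intent and $x$ is the model input; updating parameters by gradient descent on such a loss is a generic machine learning training step.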
With regard to claim 17 (Step 2A, Prong One):
The claim recites the following limitations which are drawn towards an abstract idea:
generate a topic description that conforms to the complexity preference of the user
… an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections in the output (recites mental process steps of evaluating/calculating information about the summary to be used as criteria for making a determination/judgment; e.g. grading).
As seen from above, the identified limitations recite concepts associated with an abstract idea; thus, the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.
Step 2A, Prong Two:
The following limitations have been identified as being additional elements as discussed below.
An apparatus comprising: at least one processor; at least one memory including instructions executable by the at least one processor (recites generic hardware elements to implement the abstract idea, see MPEP 2106.05(f));
a language generation model comprising parameters stored in the at least one memory and trained to … tokens …
wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function … (recites mere instructions to apply the judicial exception in a computer using machine learning tools, see MPEP 2106.05(f));
and a clustering model comprising parameters stored in the at least one memory and trained to cluster a plurality of sentences of the source document to obtain a plurality of clustered sentences corresponding to the topic description (recites insignificant extrasolution activity of sorting information, see MPEP 2106.05(g)).
This judicial exception is not integrated into a practical application because the additional elements merely gather particular information and indicate that a generically recited machine learning element (i.e. language model) can be trained (including updates) and that particular information can be clustered/sorted.
As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)).
Step 2B:
Below is the analysis of the claims:
An apparatus comprising: at least one processor; at least one memory including instructions executable by the at least one processor (recites generic hardware elements to implement the abstract idea, see MPEP 2106.05(f));
a language generation model comprising parameters stored in the at least one memory and trained to …
wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function … (recites mere instructions to apply the judicial exception in a computer using machine learning tools, see MPEP 2106.05(f));
and a clustering model comprising parameters stored in the at least one memory and trained to cluster a plurality of sentences of the source document to obtain a plurality of clustered sentences corresponding to the topic description (recites well-understood, routine, and conventional activity of sorting information, see MPEP 2106.05(d)).
As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely gather particular information and indicate that a generically recited machine learning element (i.e. language model) can be trained (including updates) and that particular information can be sorted/clustered.
With regard to claim 18, this claim recites an extraction component configured to extract text from the source document which recites insignificant extrasolution activity and well-understood, routine, and conventional activity of extracting data from a document (see MPEP 2106.05(d)).
With regard to claim 19, this claim recites a user interface configured to receive feedback on the topic description or the plurality of clustered sentences which recites using a computer tool (user interface) to interact with a user and allow user input (see MPEP 2106.05(f)) and adds no meaningful limitation beyond that of the abstract idea as discussed above.
With regard to claim 20, this claim recites wherein: the language generation model and the clustering model each comprises a transformer network which recites technological environment claim limitations that describe particular machine learning models (see MPEP 2106.05(h)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 8, 9, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al [US 2022/0269713 A1] in view of Bailey et al [US 2025/0005064 A1] and Hu et al [US 2025/0094704 A1] (with Wikipedia: Transformer [https://en.wikipedia.org/w/index.php?title=Transformer_(deep_learning_architecture)&oldid=1153144108], published 4 May 2023 (hereinafter Wiki-Transformer); BERT (language model) [https://en.wikipedia.org/w/index.php?title=BERT_(language_model)&oldid=1155736984], published 19 May 2023 (hereinafter Wiki-BERT); and Lewis et al, BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension; provided as evidence).
With regard to claim 1, Wang teaches a method comprising: obtaining a source document (see paragraphs [0035] and [0037]; a source document can be specified and obtained);
and generating, using a language generation model, a topic description …
Wang does not appear to explicitly teach:
obtaining a user characteristic that indicates a complexity preference of a user;
a topic description that conforms to the complexity preference of the user by performing a self-attention mechanism on a sequence of tokens based on the source document and the user characteristic,
wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections.
Bailey teaches obtaining a user characteristic that indicates a complexity preference of a user (see paragraphs [0031], [0036], and [0038]; the system is able to retrieve profile information about the user, including user characteristics indicating their complexity preferences associated with language reading comprehension as well as various topical knowledge);
wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function (see paragraphs [0052] and [0048]; the system utilizes the user's comprehension level for a particular topic to provide the descriptions/summarizations while being able to adjust the summarizations to include new jargon or technical words based on the user's growing knowledge of the topic).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the abstractive summary generation process of Wang by including means to obtain user characteristics associated with their knowledge comprehension level for various topics and to create summaries optimized for that comprehension level, as taught by Bailey, in order to improve the usefulness of generated abstractive summaries/descriptions about a topic. The summarization can then tailor the summary to a particular user's comprehension level, not only for general reading but also topic-specifically, while providing means to incrementally add additional technical concepts and vocabulary as the user's respective topical understanding grows, thus providing a robust and flexible summarization process that adapts to the user as their knowledge progresses in various topics, instead of rigid summarizations that only focus on the content being summarized.
Wang in view of Bailey teach a topic description that conforms to the complexity preference of the user by performing a self-attention mechanism on a sequence of tokens based on the source document and the user characteristic (see Wang, paragraphs [0045] and [0046]; see Bailey, paragraphs [0031]-[0036], [0038], [0045], and [0048]; the system provides means to provide a topic description/summary that is tailored to the user based on the user's complexity preference/understanding; [see Lewis, section 2.1 and second paragraph in right column on first page; and Wiki-Transformer, first paragraph on page 1 and Self-Attention section on page 2; showing evidence that BART is a Transformer model that uses Self-Attention]).
Wang in view of Bailey do not appear to explicitly teach wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections.
Hu teaches wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections (see Hu, paragraphs [0017], [0021], and [0024]; the loss/objective function can be based on a ratio/percentage of technical words/jargon in the summary, as discussed in further detail below).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the training of the model for summary generation of Wang in view of Bailey by including an objective/loss function that can measure the accuracy/performance of the model for a generated summary, as taught by Hu, in order to utilize means to compare the model's generated summary output to example or reference output to determine the performance or quality of the model, so that the model can utilize that objective/loss function feedback to automatically fine-tune itself to be more accurate with respect to how it generates the summaries.
Wang in view of Bailey and Hu teach wherein the language generation model is trained to generate an output that conforms to the complexity preference based on an objective function that computes a percentage of technical words over a total number of words in a portion of the output or a percentage of technical sections over a total number of sections (see Hu, paragraphs [0017], [0021], and [0024]; Bailey, paragraph [0052]; the system can utilize a loss/objective function that is based on a ratio/percentage of the technical words/jargon in the description/summary, which is utilized to fine-tune the model, where the examples used to train the model relate to the user/audience's own data sources).
With regard to claim 2, Wang in view of Bailey and Hu teach wherein: the complexity preference comprises a topic length preference or an expertise level of the user (see Bailey, paragraphs [0031]-[0035]; the complexity refers to the expertise level of the user of particular topics or the preferred length of summarization).
With regard to claim 3, Wang in view of Bailey and Hu teach generating a prompt for the language generation model based on the user characteristic, wherein the topic description is generated based on the prompt (see Wang, Figure 2; user interface 230; paragraph [0036]; see Bailey, paragraphs [0003], [0036], and [0038]; the system can receive a request to perform the summarization and the respective system can generate an input to the language generation model where the input can include information about the user).
With regard to claim 4, Wang in view of Bailey and Hu teach generating an output document based on the topic description (see Wang, Figure 1 and paragraphs [0028]-[0029] and [0045]; the system can create an output document/presentation based on the topic description/summary; note that portions of the summary/topic description can be used with the generated presentation).
With regard to claim 5, Wang in view of Bailey and Hu teach wherein generating the output document comprises: generating a prompt that includes instructions to generate the output document (see Wang, Figure 2; user interface 230; a prompt with instructions to generate the output document is displayed to the user and awaits their input).
With regard to claim 8, Wang in view of Bailey and Hu teach displaying the topic description to the user; and receiving feedback from the user based on the topic description (see Wang, paragraph [0052]; the system can display information including the topic description/abstractive summary to the user and receive corresponding feedback from the user).
With regard to claim 9, this claim is substantially similar to claim 1 and is rejected for similar reasons as discussed above.
With regard to claim 13, Wang in view of Bailey and Hu teach obtaining a user characteristic that indicates the complexity preference of the user, wherein the topic description is generated based on the complexity preference (see Wang, paragraphs [0045] and [0046]; see Bailey, paragraphs [0031], [0036], and [0038]; the system is able to retrieve profile information about the user, including user characteristics indicating their complexity preferences associated with language reading comprehension as well as various topical knowledge, and is able to create a summary (description) associated with the subject matter (topic) of the source document based on their complexity preference).
Claims 6, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al [US 2022/0269713 A1] in view of Bailey et al [US 2025/0005064 A1] and Hu et al [US 2025/0094704 A1] (with Wikipedia: Transformer [https://en.wikipedia.org/w/index.php?title=Transformer_(deep_learning_architecture)&oldid=1153144108], published 4 May 2023 (hereinafter Wiki-Transformer); BERT (language model) [https://en.wikipedia.org/w/index.php?title=BERT_(language_model)&oldid=1155736984], published 19 May 2023 (hereinafter Wiki-BERT); and Lewis et al, BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension; provided as evidence) in further view of Religa et al [US 2023/0315969 A1].
With regard to claim 6, Wang in view of Bailey and Hu teach all the claim limitations of claims 1 and 4 as discussed above.
Wang in view of Bailey and Hu do not appear to explicitly teach wherein generating the output document comprises: generating a plurality of topics; and clustering a plurality of sentences from the source document based on the plurality of topics, wherein the output document is based on the clustering.
Religa teaches generating a plurality of topics; and clustering a plurality of sentences from the source document based on the plurality of topics, wherein the output document is based on the clustering (see paragraphs [0032], [0021], and [0091]; the system can generate a plurality of topics/sections and be able to cluster/associate various sentences from the source document as part of the summary that is used when creating the slides representative of the source document).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the slide generation system of Wang in view of Bailey and Hu by including means to identify various sections/sub-topics and have respective summaries for those sub-topics, as taught by Religa, in order to focus summary generation on the particular topic represented by the respective section(s), so that more detailed and relevant summaries can be created that are section/topic specific, versus a single lengthy overall document summary.
With regard to claim 14, Wang in view of Bailey and Hu teach all the claim limitations of claim 9 as discussed above.
Wang in view of Bailey and Hu do not appear to explicitly teach clustering, using a clustering model, a plurality of sentences of the source document to obtain a plurality of clustered sentences; receiving user feedback based on the plurality of clustered sentences; and updating parameters of the clustering model based on the user feedback.
Religa teaches clustering, using a clustering model, a plurality of sentences of the source document to obtain a plurality of clustered sentences (see paragraphs [0022], [0021], and [0091]; the system can generate a plurality of topics/sections and be able to cluster/associate various sentences from the source document as part of the summary that is used when creating the slides representative of the source document).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the text summarization system of Wang in view of Bailey and Hu by including means to identify various sections/sub-topics and have respective summaries for those sub-topics, as taught by Religa, in order to focus summary generation on the particular topic represented by the respective section(s), so that more detailed and relevant summaries can be created that are section/topic specific, versus a single lengthy overall document summary.
Wang in view of Bailey and Hu in further view of Religa teach receiving user feedback based on the plurality of clustered sentences; and updating parameters of the clustering model based on the user feedback (see Religa, paragraphs [0022], [0021], and [0091]; see Wang, paragraphs [0051] and [0052]; the system allows for receiving user feedback and retraining the models based on the user feedback).
With regard to claim 17, this claim is substantially similar to claim 14 and is rejected for similar reasons as discussed above.
With regard to claim 18, Wang in view of Bailey, Hu, and Religa teach an extraction component configured to extract text from the source document (see Religa, paragraph [0049]; see Bailey, paragraph [0043]; Wang, paragraphs [0027] and [0028]; the system can extract and utilize text from the source document).
With regard to claim 19, Wang in view of Bailey, Hu, and Religa teach a user interface configured to receive feedback on the topic description or the plurality of clustered sentences (see Wang, paragraphs [0051] and [0052]; the system has means for the user to provide feedback about the generated summary/topic description).
With regard to claim 20, Wang in view of Bailey, Hu, and Religa teach wherein: the language generation model and the clustering model each comprises a transformer network (see Religa, paragraph [0067] and [0088]; see Wang, paragraphs [0039] and [0046]; the system can perform functions including encoding and decoding via transformer models).
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al [US 2022/0269713 A1] in view of Bailey et al [US 2025/0005064 A1] and Hu et al [US 2025/0094704 A1] (with Wikipedia: Transformer [https://en.wikipedia.org/w/index.php?title=Transformer_(deep_learning_architecture)&oldid=1153144108], published 4 May 2023 (hereinafter Wiki-Transformer); BERT (language model) [https://en.wikipedia.org/w/index.php?title=BERT_(language_model)&oldid=1155736984], published 19 May 2023 (hereinafter Wiki-BERT); and Lewis et al, BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension; provided as evidence) in further view of Hou et al [US 2022/0084524 A1].
With regard to claim 11, Wang in view of Bailey and Hu teach all the claim limitations of claim 9 as discussed above.
Wang in view of Bailey and Hu do not appear to explicitly teach generating, using the language generation model, a plurality of topic descriptions based on the source document, wherein the objective function is based on a number of the plurality of topic descriptions.
Hou teaches generating, using the language generation model, a plurality of topic descriptions based on the source document, wherein the objective function is based on a number of the plurality of topic descriptions (see Hou, paragraphs [0033], [0041], and [0044]; the system can utilize multiple classification levels, with a model for each classification level, where the objective function measures the loss for the particular classification level so that each model is tailored for that reading/classification level).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the summarization system of Wang in view of Bailey and Hu by utilizing a plurality of models for generating different topic descriptions/summaries, as taught by Hou, in order to allow various summary generators to focus on a particular summarization level, such as expertise/reading level, so that the system can cater to a wide variety of users with varying degrees of expertise and knowledge and provide summarizations that the respective users can understand.
With regard to claim 12, Wang in view of Bailey and Hu teach all the claim limitations of claim 9 as discussed above.
Wang in view of Bailey and Hu do not appear to explicitly teach wherein updating the language generation model comprises: performing a reinforcement learning process based on the objective function.
Hou teaches wherein updating the language generation model comprises: performing a reinforcement learning process based on the objective function (see Hou, paragraph [0047]; reinforcement learning can be used).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the summarization system of Wang in view of Bailey and Hu by using a widely used and well-known machine learning process, as taught by Hou, in order to employ a particular known machine learning process that the designers or administrators of the system prefer, while still providing means for the system to learn how to generate summaries tailored to the user's characteristics.
Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al [US 2022/0269713 A1] in view of Bailey et al [US 2025/0005064 A1] and Hu et al [US 2025/0094704 A1] (with Wikipedia: Transformer [https://en.wikipedia.org/w/index.php?title=Transformer_(deep_learning_architecture)&oldid=1153144108], published 4 May 2023 (hereinafter Wiki-Transformer); BERT (language model) [https://en.wikipedia.org/w/index.php?title=BERT_(language_model)&oldid=1155736984], published 19 May 2023 (hereinafter Wiki-BERT); and Lewis et al, BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension; provided as evidence) in further view of Religa et al [US 2023/0315969 A1] and in further view of Vajjiparti et al [US 2024/0420189 A1].
With regard to claim 15, Wang in view of Bailey and Hu in further view of Religa teach all the claim limitations of claims 9 and 14 as discussed above.
Wang in view of Bailey and Hu in further view of Religa do not appear to explicitly teach generating a description of an intent of the user feedback.
Vajjiparti teaches generating a description of an intent of the user feedback (see paragraph [0162]; the system can utilize the user feedback and process it to generate a description/vector representation of the semantic meaning/intent).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the text summarization system of Wang in view of Bailey and Hu in further view of Religa by utilizing vector encoding/modeling of user feedback, as taught by Vajjiparti, in order to allow the system to deterministically determine the semantic meaning of a user's feedback, utilize that representation to find similar feedback from other users even if worded differently, and organize the users' feedback so that the system can determine the most common/prevalent feedback intent and address the most pressing feedback issues first.
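As an illustration of this kind of vector-based feedback comparison, a minimal sketch (assuming scikit-learn; TF-IDF is a simplification used by the Examiner for illustration, as Vajjiparti's semantic vector representation is not necessarily TF-IDF based) follows:

```python
# Minimal sketch, assuming scikit-learn, of encoding user feedback as
# vectors and comparing feedback intents by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

feedback = [
    "The summary is too technical for me.",
    "Please use less jargon in the summary.",
    "The slides load slowly.",
]

vectors = TfidfVectorizer().fit_transform(feedback)   # feedback vectors
similarity = cosine_similarity(vectors)               # pairwise intents
print(similarity.round(2))  # similar intents score higher than unrelated ones
```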
With regard to claim 16, Wang in view of Bailey and Hu in further view of Religa and in further view of Vajjiparti teach receiving a modified description of the intent of the user feedback (see Wang, paragraphs [0051] and [0052]; see Vajjiparti, paragraphs [0162] and [0179]-[0181]; the system receives the intent of the user and creates a modified intent);
computing a likelihood loss based on the modified description of the intent; and updating the parameters of the clustering model based on the likelihood loss (see Wang, paragraphs [0051] and [0052]; Hu, paragraphs [0017], [0021], and [0024]; the system can retrain/update the model including its parameters/weights in accordance with a loss function).
Response to Arguments
Applicant's arguments (see the first paragraph on page 10 through the last paragraph on page 26) have been fully considered but they are not persuasive. The applicant argues that the claims are not drawn to an abstract idea and the respective 35 USC 101 rejections should be withdrawn. The Examiner respectfully disagrees. The particular arguments and respective response are discussed below.
In particular, the applicant argues (see the last paragraph on page 11 through the last paragraph on page 13) that at least the "generating a topic description" limitation and the "language generation model is trained" limitation are not related to mental processes because these steps involve complicated computations and operations in high dimension spaces that can hardly be performed in the mind or by pen and paper. However, the usage of computations can relate to mental process steps, especially when the claim limitations are broadly recited. The Examiner appreciates that complicated computations could occur, but the claims themselves are not limited strictly to that embodiment. Additionally, as noted in the 35 USC 101 rejections, the abstract idea relates to the limitation of generating topic descriptions (an outline) for a document that is tailored to a target audience, where the usage of the machine learning model (language generation model) and its respective training were deemed additional elements (i.e. not part of the abstract idea). Therefore, applicant's arguments are not persuasive with regard to Step 2A, Prong 1.
The applicant argues (see the last paragraph on page 14 through the first paragraph on page 16) that the claims recite technological implementation details regarding the usage of a self-attention mechanism performed on a sequence of tokens/words in high dimension space, which are complex steps that cannot be practically performed in the mind or by pen and paper, and also that the method involves training a specialized machine learning model for NLP and document summarization based on a uniquely designed objective function. However, with regard to the specialized machine learning model, the claims merely indicate a model type with no architectural details of the model and indicate its intended methodology and field of use for document summarization. Although the term "objective function" is utilized, the usage of training including an objective function (i.e. what and how an output is evaluated in order to know how to iteratively train the ML model) is "required for any machine learning model" (see Recentive Analytics, Inc. v. Fox Corp.). With the broad recitation of the intended usage of the ML model, the applicant's argument of a specialized model is not persuasive. With regard to the self-attention mechanism, the concept itself relates to remembering or storing previous state(s)/variable(s)/input(s), where the claim merely recites the idea of a solution by indicating that the above noted concept occurs, with no exact or specific details on its implementation or on how the claimed "self-attention mechanism" is different from the ordinary usage of self-attention in Transformer ML models.
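For the record, the standard self-attention operation of a Transformer, as evidenced by the cited Wiki-Transformer reference, is:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,$$

and the claims invoke this mechanism without any implementation detail beyond the standard formulation.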
The applicant argues (see the third to last and second to last paragraphs on page 16) that the claims do not recite a mental process step because the cited limitations are not comparable to any examples outlined in MPEP 2106.04(a)(2)(III). The Examiner notes that the applicant concludes that the claims do not recite a mental process without explaining why the steps do not relate to the identified mental process steps. E.g., generating a topic description can relate to mental evaluation of a document to think of ideas or brainstorm topics (in the mind or with a notepad and pencil). With regard to the complexity preference of a user, that also relates to mental evaluation (determining what the target audience is expected to gain from a presentation) and decision/judgment steps (what topics to present and at what level of detail). Therefore, as noted above, the claim limitations include at least one claim limitation that recites an abstract idea with respect to Step 2A, Prong 1.
The applicant argues (see the first paragraph on page 17 through the first paragraph on page 20) that the cited claim limitations are integrated into a practical application since the claims recite a "unique and specific implementation" of training a specialized language generation model. As noted in the Recentive decision, an abstract idea does not become nonabstract by limiting the invention to a particular field of use. The current claims appear to recite an already available technology, with its already available basic functions, used as a tool in executing the claimed process, i.e. document summarization. Additionally, with respect to computer-assisted methods, merely speeding up human activity via the increased speed/efficiency resulting from the use of computers (with no improved computer techniques) does not itself create eligibility. In other words, the instant claims indicate the usage of generic/standard machine learning models and their associated functionality (i.e. self-attention and training via an objective function) at a high level of generality, which amounts to merely using the computer as a tool to implement the judicial exception (i.e. document evaluation/summarization).

Additionally, applicant argues that the claims are similar to the updated examples since the additional elements recite a unique and specific implementation of training a specialized language generation model that results in improved machine learning processes and improved document summarization, where the specific implementation and specialized ML model are trained based on an objective function that computes a percentage of technical words over a total number of words or a percentage of technical sections over a total number of sections. With regard to applicant's arguments regarding an improvement to the functioning of a computer or to any other technology or technical field, the Examiner notes that, per MPEP 2106.05(a), "[a]n important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107." (emphasis added). Additionally, it is important to note that "the judicial exception alone cannot provide the improvement" and that "the claim reflects the asserted improvement". Currently, the claim indicates that a topic description (or output) is generated based on an objective function with particular parameters; however, the Examiner notes that the output is information that is not used for anything further, and that having a particular meaning for the information relates to intended use or field of use. With regard to the objective function, this relates to the judicial exception, in particular to the grading or evaluation of information/output, with the objective function, under the broadest reasonable interpretation, relating to the percentage of big/technical words being used.
Although the claims make mention of a language generation model, the training of the model is used for generating a topic description and does not appear to relate to system or computer performance. Additionally, although applicant provides arguments regarding conventional document processing models, asserting more accuracy and control over generated topics and section content, the claim limitations do not necessarily require sections nor discuss the accuracy of the topics. Although training is performed, the generated output is merely meant to "conform" to the complexity preference, which relates to being similar. As such, applicant's arguments are not persuasive.
The applicant argues (see the second paragraph on page 20 through the second paragraph on page 22) with regard to improving document summarization. Per MPEP 2106.05(a), the extent to which the claim covers a particular solution or a particular way to achieve a desired outcome is an important consideration when determining whether a claim improves a technology or merely claims the idea of a solution. As discussed above, the claims make mention of the desire to have topic descriptions generated (determined) using machine learning via a model that makes use of self-attention and that is already trained based on an objective function; however, these recitations merely recite a high-level usage of those tools for the judicial exception and thus appear to be directed more towards claiming the idea of a solution, since no specific details are given about the model or about how the self-attention or objective function is different from standard ML usage, except by indicating that it is for a particular field of use. Accordingly, the particular field of use of document summarization relates to the judicial exception, and MPEP 2106.05(a) indicates that the judicial exception alone cannot provide the improvement. Also, as noted in Recentive, merely using computer-assisted methods to achieve speed and efficiency gains does not itself make a claim eligible; the extent to which a claim covers the improvement or solution is an important consideration, and indicating that a machine learning model is trained and used appears to cover more the idea of a solution. As such, applicant's arguments are not persuasive.
The applicant argues (see the third paragraph on page 22 through the second to last paragraph on page 23) that the combination of elements as a whole integrates into a practical application since the claims recite a unique and specific implementation of training a specialized language generation model. However, as discussed above, the training merely indicates usage of self-attention and an objective function, with no details on how this methodology is different from other machine learning processes nor how document summarization is improved; the system provides an output which, as noted above, relates to the judicial exception of users evaluating/summarizing information. Therefore, applicant's arguments are not persuasive.
The applicant argues (see the last paragraph on page 23 through the last paragraph on page 26) that, with respect to the NLP and document summarization industry, the claims recite unconventional operations that are not well-understood, routine, or conventional (WURC) in the field because of the combination of elements in the "generating, using a language generation model, a topic description" limitation. As noted above, although applicant's arguments may recite various details about reinforcement learning, per MPEP 2106.05(a), the extent to which the claim covers the solution is an important consideration; in addition, the judicial exception alone cannot provide the improvement. With regard to the generating topic description argument, as noted above, this limitation recites the judicial exception. With respect to the usage of the machine learning models, as noted in the 35 USC 101 rejections, the claim recitations recite the machine learning model and its intrinsic features at a high level of generality (e.g. self-attention and objective functions). Also, the claim recites usage of a trained model while applicant's arguments indicate that their method is "training a model"; the Examiner notes that no active training step appears to be required in the claim. As such, applicant's arguments are not persuasive.
Applicant's arguments (see the first paragraph on page 27 through the last paragraph on page 33) have been fully considered but they are not persuasive. The applicant argues that the cited references do not teach the language generation model is trained to generate an output that conforms to the complexity preference based on the claimed objective function. The Examiner respectfully disagrees. In particular, applicant argues that Wang does not teach the limitation, that Bailey makes no mention of an objective function based on the percentage of technical words in the topic description or the percentage of technical sections, and that Hu's teachings are different from the claim limitations. As illustrated in the 35 USC 103 rejections, the combination of references teaches the claim limitations as recited, with the Wang reference illustrating the machine learning model and the technique of receiving a source document and generating slides with topic descriptions (summaries) for an audience, and with Bailey illustrating a need and desire to tailor the summaries/descriptions to the characteristics of the user, since users' knowledge levels for various subjects differ, as do their understanding and vocabulary/reading level. Moreover, Bailey hints at how the process is done, including utilizing the user's own data sources to determine their comprehension level and when to incrementally include new jargon (technical words/concepts), with Hu illustrating in more detail the training process, including some detail on the loss function involving relevant words. The combination illustrates that, for technical summarizations, the determination of how much jargon to include in the summary is based on a ratio or percentage of a set of relevant words (e.g. jargon or technical words) to the overall count of words, thus allowing the system to tailor the summarization based on the user's characteristics. Additionally, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., reinforcement learning and reward) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The Examiner notes that such limitations are not recited in claim 1 but rather in claim 12, and thus those respective arguments are not persuasive with regard to claim 1. Therefore, applicant's arguments are not persuasive.
Applicant's arguments (see the first paragraph on page 34 through the top of page 35) with respect to claim 7 have been fully considered and are persuasive. The applicant amended the claim to incorporate new limitations that required further search and consideration; however, no new references were found that would teach or fairly suggest the claim limitations as amended. Accordingly, the respective 35 USC 103 rejection of claim 7 has been withdrawn.
Applicant's arguments (see the last whole paragraph on page 35) have been fully considered but they are not persuasive. The applicant argues that the respective dependent claims are allowable based on their dependence upon the independent claims as discussed above. The Examiner respectfully disagrees. As discussed above, the applicant's arguments were not persuasive; therefore the rejections of the independent claims still stand, and accordingly the respective rejections of the dependent claims stand as well.
Examiner Comments
Upon review of the arguments and respective sections in the specification, the Examiner believes clarifying detail regarding the output of the language generation model can help advance prosecution with respect to at least the 35 USC 101 rejections. For example, upon reviewing claim 1, the system generates a topic description; however, the respective limitation of the output is broad. Additional details, such as focusing on the output being a document or slides with many sections and respective topic descriptions for each section, can help integrate the claim into a practical application by narrowing the scope of the claim. Additionally, any clarifying details regarding the layout of the document and/or how the topic description is affected by the topic length preference for different complexity levels could also add distinguishing features over the teachings of the cited prior art.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS whose telephone number is (571)270-3567. The examiner can normally be reached M-F 11-8 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARC S SOMERS/Primary Examiner, Art Unit 2159 3/21/2026