Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,218

METHOD FOR GENERATING SUMMARY AND SYSTEM THEREOF

Status: Non-Final OA (§103)
Filed: May 24, 2024
Examiner: MUELLER, PAUL JOSEPH
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Seoul National University R&DB Foundation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (97 granted / 128 resolved; +13.8% vs TC avg)
Interview Lift: +34.6% — strong (resolved cases with interview vs. without)
Typical Timeline: 3y 0m avg prosecution; 25 currently pending
Career History: 153 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 128 resolved cases.
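The card figures above follow from simple arithmetic; a quick sketch (values taken from the cards, with the Tech Center average back-solved from the stated delta):

```python
# Figures from the examiner cards above.
granted, resolved = 97, 128
allow_rate = granted / resolved   # career allowance rate, displayed as 76%

# The "+13.8% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.138

print(f"allow rate: {allow_rate:.1%}")     # ~75.8%
print(f"implied TC avg: {tc_avg:.1%}")     # ~62.0%
```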

Office Action

§103
DETAILED ACTION

Introduction

This office action is in response to Applicant's submission filed on May 24, 2024. Claims 1-22 are pending in the application. As such, claims 1-22 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on May 24, 2024. These drawings have been accepted and considered by the Examiner.

Specification

The disclosure is objected to because of the following informalities: In the specification, paragraph [0079] refers to Fig. 20. However, Fig. 20 does not exist. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh et al. (US Patent Pub. No. 20210350229 A1), hereinafter Saleh, in view of Mitsui (US Patent Pub. No. 20230409830 A1), in view of Lee et al. (US Patent Pub. No. 20180150905 A1), hereinafter Lee, in view of Kascenas et al. (US Patent Pub. No. 20230103692 A1), hereinafter Kascenas.

Regarding claims 1 and 22, Saleh teaches a method and a system for generating a summary (Saleh in [0024] teaches a method and system for generating summaries) performed by at least one computing device (Saleh in [0017] teaches using computer programs on one or more computers), the method comprising: one or more processors; and a memory that stores a computer program executed by the one or more processors, wherein the computer program includes instructions that cause to be performed (Saleh in [0060] teaches using a data processing apparatus which executes instructions and has a memory with program instructions): [claim 22 only an operation of] acquiring a first sample pair, the first sample pair including an original text and a summary corresponding to the original text (Saleh in [0022] teaches using a labeled training dataset which includes text documents and corresponding ground-truth summaries of the text documents); and [claim 22 only an operation of] updating the summary model by performing a summary task [using the second sample pair] (Saleh in [0024] teaches using a training engine which needs multiple millions of pairs of text documents and human-written summaries in order to train the network to generate meaningful and linguistically fluent summaries).
Saleh does not teach, however Mitsui teaches [claim 22 only an operation of] extracting a common phrase that appears simultaneously in the original text and the summary of the first sample pair (Mitsui in [0049] teaches specifying frequent appearance parts of words included in the summary sentences in the original text); [claim 22 only an operation of] selecting a first phrase among common phrases based on a prediction probability of a summary model for the common phrases (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability]).

Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh further in view of Mitsui to allow for identifying common segments in a summary as the original text. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).

Saleh, as modified above, does not teach, however Lee teaches [claim 22 only an operation of] generating a second sample pair by modifying [the first phrase in the original text] and the summary of the first sample pair (Lee in [0131] teaches generate new summarized content by changing the content summarization range of the summarized content).
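The limitation mapped above — extracting a phrase that appears simultaneously in the original text and its summary — is an algorithmic step. As an illustrative sketch only (not the applicant's or any cited reference's actual implementation; the function name and n-gram framing are assumptions), a word n-gram intersection captures the idea:

```python
def common_ngrams(original: str, summary: str, n: int = 2) -> set[tuple[str, ...]]:
    """Word n-grams that appear in both the original text and the summary."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(original) & ngrams(summary)

original = "the model was trained on news articles from the web"
summary = "the model was trained on web news"
print(sorted(common_ngrams(original, summary)))
# → [('model', 'was'), ('the', 'model'), ('trained', 'on'), ('was', 'trained')]
```

In a real pipeline the shared phrases would then be scored by the summary model, as the next limitation recites.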
Lee is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Lee to allow for generating a new summary. Motivation to do so would provide for the model learning part to allow the data summarization model to learn through reinforcement learning using feedback as to whether a result of the content summarization according to the learning is correct (Lee [0090]).

Saleh, as modified above, does not teach, however Kascenas teaches [claim 22 only an operation of] generating a second sample pair by modifying the first phrase in the original text and [the summary of the first sample pair] (Kascenas in [0116] teaches negative pair data may be generated by the negative pair generator from training data sets that may be normal before modification by the negative pair generator).

Kascenas is considered to be analogous to the claimed invention because it is in the same field of generating training data sets. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Kascenas to allow for negative pair data to be generated by the negative pair generator. Motivation to do so would allow for generating training samples for training a machine learning model to perform a task (Kascenas [0033]).

Regarding claim 2, Saleh, as modified above, teaches the method of claim 1. Saleh further teaches wherein the summary model is a pretrained model (Saleh in [0007] teaches using a pre-trained text summarization neural network).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt et al. (US Patent Pub. No. 20240386209 A1), hereinafter Eisenstadt.

Regarding claim 3, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the common phrase, and obtaining a prediction probability.

Saleh, as modified above, does not teach, however Eisenstadt teaches wherein when [the common phrase] includes a plurality of tokens, [the prediction probability] for [the common phrase] is obtained based on [a prediction probability of the summary model] for a first token among the plurality of tokens (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens).

Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar et al. (US Patent Pub. No. 20220068279 A1), hereinafter Embar.

Regarding claim 4, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the common phrase, and the first phrase.
Saleh, as modified above, does not teach, however Mitsui teaches wherein the selecting of the first phrase includes: selecting the specific [common phrase] as the [first phrase] based on a determination that the obtained prediction probability is greater than or equal to a reference value (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability]).

Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).
Saleh, as modified above, does not teach, however Eisenstadt teaches obtaining a prediction probability for the specific [common phrase] by inputting the [extracted text] into the summary model without inputting the original text of the first sample pair into the summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to not include the original text of the first sample pair]).

Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]).

Saleh, as modified above, does not teach, however Embar teaches extracting a text positioned before a specific [common phrase] from the summary of the first sample pair (Embar in [0050] teaches identifying a window of text that starts three tokens before the matched trigger phrase).

Embar is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Embar to allow for identifying text before a phrase. Motivation to do so would allow for generating the conversation summary including providing the actual highlight as at least a portion of the conversation summary (Embar [0016]).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar, in view of Deligia et al. (US Patent Pub. No. 20180253344 A1), hereinafter Deligia.

Regarding claim 5, Saleh, as modified above, teaches the method of claim 4. Saleh, as modified above, teaches the common phrase, the original text, and the first sample pair.

Saleh, as modified above, does not teach, however Deligia teaches wherein the obtaining of the prediction probability for the specific common phrase includes inputting an empty text instead of the original text of the first sample pair (Deligia in [0106] teaches providing analyzer with an empty text input).

Deligia is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Deligia to allow for providing analyzer with an empty text input. Motivation to do so would allow for using an ingestion pipeline to ingest and process data in a synchronous manner (in addition to asynchronously) which advantageously allows for a more flexible workflow, allowing clients to customize data processing based on their individual needs and get results back (Deligia [0116]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar, in view of Kidd et al. (US Patent Pub. No. 20240232401 A1, as supported by 18/153,315 published 20230111), hereinafter Kidd, in view of Tiwari et al. (US Patent Pub. No. 20210133251 A1), hereinafter Tiwari.

Regarding claim 6, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the common phrase, and the first phrase.
Saleh, as modified above, does not teach, however Mitsui teaches wherein the selecting of the first phrase includes: selecting the specific [common phrase] as the [first phrase] based on [a determination that a difference between the first prediction probability and the second prediction probability] is greater than or equal to a reference value (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability]).

Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).
Saleh, as modified above, does not teach, however Eisenstadt teaches obtaining a first prediction probability for the specific common phrase by inputting the original text of the first sample pair and the extracted text into the summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to be the original text of the first sample pair and the extracted text]); [replacing the specific common phrase with another phrase in the original text of the first sample pair and] obtaining a second prediction probability of the summary model for the another phrase using the replaced original text (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to be the another phrase]).

Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]).

Saleh, as modified above, does not teach, however Embar teaches extracting a text positioned before a specific common phrase from the summary of the first sample pair (Embar in [0050] teaches identifying a window of text that starts three tokens before the matched trigger phrase). Embar is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Embar to allow for identifying text before a phrase. Motivation to do so would allow for generating the conversation summary including providing the actual highlight as at least a portion of the conversation summary (Embar [0016]).

Saleh, as modified above, does not teach, however Kidd teaches [selecting the specific common phrase as the first phrase based on] a determination that a difference between the first prediction probability and the second prediction probability is greater than or equal to a reference value (Kidd in [0097] teaches determining a probability difference between the first probability of detection and the second probability of detection; and based on determining that the probability difference is greater than a threshold probability difference).

Kidd is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Kidd to allow for determining a probability difference between the first probability of detection and the second probability of detection. Motivation to do so would allow for the system to generate summaries across different devices (Kidd [0072]).
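Claim 6, as characterized above, turns on a probability-difference test. A minimal sketch of that comparison (the function name and the 0.3 reference value are hypothetical; the summary model producing the probabilities is not implemented here):

```python
def select_by_probability_gap(p_common: float, p_replacement: float,
                              reference: float = 0.3) -> bool:
    """Select the common phrase when the model's prediction probability for it
    exceeds the probability for a replacement phrase by at least the reference value."""
    return (p_common - p_replacement) >= reference

print(select_by_probability_gap(0.85, 0.40))  # → True  (gap 0.45)
print(select_by_probability_gap(0.55, 0.40))  # → False (gap 0.15)
```

A large gap suggests the model relies heavily on that specific phrase, which is why it is selected for modification.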
Saleh, as modified above, does not teach, however Tiwari teaches replacing the specific common phrase with another phrase in the original text of the first sample pair [and obtaining a second prediction probability of the summary model for the another phrase using the replaced original text] (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases).

Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]).

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Tiwari, in view of Singh Bawa et al. (US Patent Pub. No. 20220237373 A1), hereinafter Singh.

Regarding claim 7, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the generating of the second sample pair, and the first phrase.

Saleh, as modified above, does not teach, however Tiwari teaches wherein the first phrase is an entity (Tiwari in [0038] teaches extracting entities), and [the generating of the second sample pair includes] replacing the first phrase with a second phrase [belonging to a same entity category as the first phrase], the second phrase being different from the first phrase (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases).

Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]).

Saleh, as modified above, does not teach, however Singh teaches [the generating of the second sample pair includes replacing] the first phrase with a second phrase belonging to a same entity category as the first phrase, the second phrase being different from the first phrase (Singh in [0062] teaches using different category-specific static text and entity values for different category-specific entities).

Singh is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Singh to allow for using different category-specific entities. Motivation to do so would allow for systems that support type-specific automated document summarization (also referred to as encapsulation) (Singh [0004]).

Regarding claim 8, Saleh, as modified above, teaches the method of claim 7. Saleh, as modified above, teaches the plurality of sample pairs. Saleh further teaches wherein the first sample pair is selected from a plurality of sample pairs included in a previously prepared training set (Saleh in [0022] teaches using a labeled training dataset which includes text documents and corresponding ground-truth summaries of the text documents).
Saleh, as modified above, does not teach, however Tiwari teaches the second phrase is selected from entities extracted from the [plurality of sample pairs] (Tiwari in [0038] teaches extracting entities, and in [0045-0046] teaches selecting the best match to get the most relevant entity).

Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for extracting entities. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]).

Regarding claim 9, Saleh, as modified above, teaches the method of claim 7. Saleh, as modified above, teaches the first phrase and the second phrase.

Saleh, as modified above, does not teach, however Tiwari teaches wherein the replacing of [the first phrase] with [the second phrase] includes replacing a word in [the first phrase] with a word in [the second phrase] at a corresponding position, when both [the first phrase] and [the second phrase] include a plurality of words (Tiwari in [0058-0059] teaches replacing a phrase in the original sentence, and replacing a single word with another having equivalent meaning in the context).

Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing a phrase. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]).
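Claims 7-9, as mapped above, recite replacing the first phrase with a different phrase from the same entity category. A toy sketch under that reading (the entity pool, category labels, and names are invented for illustration and are not from any cited reference):

```python
import random

# Hypothetical pool of entities grouped by category, e.g. collected from other sample pairs.
ENTITY_POOL = {
    "PERSON": ["Alice Kim", "John Park"],
    "ORG": ["Acme Corp", "Globex Inc"],
}

def replace_entity(text: str, first_phrase: str, category: str,
                   rng: random.Random) -> str:
    """Replace the first phrase with a different phrase from the same entity category."""
    candidates = [e for e in ENTITY_POOL[category] if e != first_phrase]
    return text.replace(first_phrase, rng.choice(candidates))

print(replace_entity("Alice Kim signed the contract.", "Alice Kim",
                     "PERSON", random.Random(0)))
# → John Park signed the contract.
```

Keeping the replacement within the same category preserves grammaticality while changing the factual content, which is the point of generating the second sample pair.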
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar, in view of Tiwari.

Regarding claim 10, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the generating of the second sample pair, the first phrase, the second phrase.

Saleh, as modified above, does not teach, however Mitsui teaches wherein the generating of the second sample pair includes: selecting a [second phrase] different from the [first phrase] among the plurality of predefined phrases based on the obtained prediction probability (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability]).

Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).
Saleh, as modified above, does not teach, however Eisenstadt teaches obtaining a prediction probability for each of a plurality of predefined phrases by inputting the original text of the first sample pair and the extracted text into the summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to be the plurality of predefined phrases]).

Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]).

Saleh, as modified above, does not teach, however Embar teaches extracting a text positioned before the first phrase from the summary of the first sample pair (Embar in [0050] teaches identifying a window of text that starts three tokens before the matched trigger phrase).

Embar is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Embar to allow for identifying text before a phrase. Motivation to do so would allow for generating the conversation summary including providing the actual highlight as at least a portion of the conversation summary (Embar [0016]).
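The Embar mapping above concerns extracting the text positioned before a phrase in the summary. That step reduces to string slicing; a sketch (illustrative only, function name assumed):

```python
def text_before_phrase(summary: str, phrase: str) -> str:
    """Return the portion of the summary preceding the first occurrence of the phrase."""
    idx = summary.find(phrase)
    return summary[:idx] if idx != -1 else ""

print(repr(text_before_phrase("the model was trained on web news", "web news")))
# → 'the model was trained on '
```

The extracted prefix serves as the conditioning context fed to the summary model when scoring candidate phrases.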
Saleh, as modified above, does not teach, however Tiwari teaches replacing the first phrase with the second phrase (Tiwari in [0058-0059] teaches replacing a phrase in the original sentence).

Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]).

Regarding claim 11, Saleh, as modified above, teaches the method of claim 10. Saleh, as modified above, teaches the common phrase, and the first phrase.

Saleh, as modified above, does not teach, however Mitsui teaches wherein the second phrase is selected among phrases whose obtained prediction probability is less than a reference value (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability], and in [0062] teaches identifying cases where it is less than the threshold value).

Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is less than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).

Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar, in view of Tiwari, in view of Tuschman et al. (US Patent Pub. No. 20190102802 A1), hereinafter Tuschman.

Regarding claim 12, Saleh, as modified above, teaches the method of claim 10. Saleh, as modified above, teaches the second phrase, and the prediction probability.

Saleh, as modified above, does not teach, however Tuschman teaches wherein the [second phrase] is randomly selected from phrases whose obtained [prediction probability] is within a certain range (Tuschman in [0085] teaches identifying a respective percentile range of likelihood).

Tuschman is considered to be analogous to the claimed invention because it is in the same field of generating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tuschman to allow for identifying a respective percentile range of likelihood. Motivation to do so would allow for real-time bidding for displaying online advertising (Tuschman [0074]).

Regarding claim 13, Saleh, as modified above, teaches the method of claim 10. Saleh, as modified above, teaches the second phrase, and the prediction probability.
Saleh, as modified above, does not teach, however Tuschman teaches wherein the selecting of the [second phrase] includes selecting the [second phrase] among remaining phrases, excluding phrases with the prediction probability in the top K% (where K is a real number between 0 and 50) among the plurality of predefined phrases (Tuschman in [0085] teaches identifying a respective percentile range of likelihood, for example, one audience can be the top five percent of users in measure of likelihood to engage [this range can be selected accordingly to exclude any particular range desired]). Tuschman is considered to be analogous to the claimed invention because it is in the same field of generating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tuschman to allow for identifying a respective percentile range of likelihood. Motivation to do so would allow for real-time bidding for displaying online advertising (Tuschman [0074]). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Wu et al. (US Patent Pub. No. 20230004589 A1), hereinafter Wu. Regarding claim 14, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the original text, the summary model, the first sample pair, and the second sample pair. 
Saleh, as modified above, does not teach, however Wu teaches further comprising: acquiring a negative summary of an [original text] of a specific sample pair among [the first sample pair] and [the second sample pair], the summary of [the first sample pair] and the summary of [the second sample pair] being positive summaries (Wu in [0032] teaches using a document representation, a positive summary representation and a negative summary representation); and additionally updating [the summary model] by performing a contrastive learning task using the [original text] of the specific sample pair, a positive summary of the specific sample pair, and the negative summary (Wu in [0006] teaches training a summary generation model based on a total contrastive loss function, and in [0032] teaches using a document representation, a positive summary representation and a negative summary representation). Wu is considered to be analogous to the claimed invention because it is in the same field of generating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Wu to allow for training a summary generation model based on a total contrastive loss function. Motivation to do so would allow for a summary generation model which is trained based on the total contrastive loss function, so that contrastive learning is introduced in model training, which improves the accuracy of the summary generation model (Wu [0035]). Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Wu, in view of Tiwari. Regarding claim 15, Saleh, as modified above, teaches the method of claim 14. Saleh, as modified above, teaches the specific sample pair, the first phrase, the second phrase, the first sample pair, the second sample pair, the negative summary.
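The Wu-style contrastive learning task referenced above (an original text pulled toward its positive summary and pushed away from a negative summary in embedding space) can be sketched with a margin loss; the cosine scoring, the margin value, and the toy vectors are assumptions for illustration, not Wu's actual total loss function.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(doc, pos, neg, margin=0.5):
    """Margin-based contrastive loss over a (document, positive summary,
    negative summary) triple of embeddings. Loss is zero once the positive
    summary is closer to the document than the negative one by `margin`."""
    return max(0.0, margin - cosine(doc, pos) + cosine(doc, neg))

doc = [1.0, 0.0]
pos = [0.9, 0.1]   # embedding close to the document
neg = [0.0, 1.0]   # embedding orthogonal to the document
loss = contrastive_loss(doc, pos, neg)
```

With these toy vectors the positive summary already dominates, so the loss is zero; swapping the positive and negative summaries produces a positive loss that a trainer would minimize.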
Saleh, as modified above, does not teach, however Tiwari teaches wherein the specific sample pair is [the second sample pair], the [second sample pair] is generated by replacing the [first phrase] with a [second phrase] different from the [first phrase] (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases), and the acquiring of the [negative summary] includes generating a [negative summary] of the [second sample pair] by replacing the [first phrase] included in the [negative summary] of the [first sample pair] with the [second phrase] (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases). Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]). Regarding claim 16, Saleh, as modified above, teaches the method of claim 14. Saleh, as modified above, teaches the specific sample pair, the first phrase, the second phrase, the first sample pair, the second sample pair, the negative summary. 
Saleh, as modified above, does not teach, however Tiwari teaches wherein the specific sample pair is the [second sample pair], the [second sample pair] is generated by replacing the [first phrase] with a [second phrase] different from the first phrase (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases), and the acquiring of the [negative summary] includes generating a [negative summary] of the [second sample pair] by replacing the [first phrase] in the summary of the [first sample pair] with a third phrase different from the second phrase (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases [regarding the third phrase different from the second phrase, this can be repeated to obtain a third phrase]). Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Wu, in view of Tiwari, in view of Modani et al. (US Patent Pub. No. 20180011931 A1), hereinafter Modani. Regarding claim 17, Saleh, as modified above, teaches the method of claim 14. Saleh, as modified above, teaches the specific sample pair, the first phrase, the second phrase, the first sample pair, the second sample pair, the positive summary. 
Saleh, as modified above, does not teach, however Tiwari teaches the [positive summary] of the [second sample pair] includes a summary generated by replacing the [first phrase] in [the second positive summary] with a [second phrase] that is different from the [first phrase] (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases). Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]). Saleh, as modified above, does not teach, however Modani teaches wherein the specific sample pair is the [second sample pair], the [positive summary] of the [first sample pair] includes a first positive summary that is a reference summary and a second positive summary that is not the reference summary (Modani in [0017] teaches generating a second summary from a first summary). Modani is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Modani to allow for generating a second summary from a first summary. Motivation to do so would allow for multiple summaries to be produced that are likely to be diverse from one another (Modani [0016]).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Eisenstadt, in view of Embar, in view of Kidd, in view of Tiwari. Regarding claim 18, Saleh, as modified above, teaches the method of claim 1. Saleh further teaches further comprising: acquiring a third sample pair, the third sample pair including an original text and a summary corresponding to the original text (Saleh in [0022] teaches using a labeled training dataset which includes text documents and corresponding ground-truth summaries of the text documents [this can be repeated to obtain the third pair]). Saleh, as modified above, does not teach, however Mitsui teaches [generating a fourth sample pair by replacing] a third phrase that appears simultaneously in the original text and the summary of the third sample pair [with a fourth phrase] (Mitsui in [0049] teaches specifying frequent appearance parts of words included in the summary sentences in the original text [this can be repeated to determine a third phrase]). Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).
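For orientation, the sample-pair perturbation the rejection assembles here from Mitsui (locating a phrase common to the original text and its summary) and Tiwari (substituting a paraphrase in both) can be sketched as below; the single-token matching and the example strings are illustrative assumptions, not claim language or cited disclosure.

```python
def common_phrases(original: str, summary: str):
    """Phrases (simplified here to single tokens) that appear
    simultaneously in the original text and the summary."""
    return sorted(set(original.split()) & set(summary.split()))

def perturb_pair(original: str, summary: str, old: str, new: str):
    """Build a new (original, summary) pair with `old` replaced by `new`
    in both members; whole-token replacement is an assumption."""
    repl = lambda s: " ".join(new if t == old else t for t in s.split())
    return repl(original), repl(summary)

orig = "the senate passed the bill on tuesday"
summ = "senate passed bill"
shared = common_phrases(orig, summ)                  # candidate phrases
new_pair = perturb_pair(orig, summ, "bill", "motion")
```

Repeating the same perturbation with a different replacement token yields the additional pairs (third, fourth) the rejection refers to.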
Saleh, as modified above, does not teach, however Eisenstadt teaches obtaining a prediction probability for the third phrase by inputting the original text of the third sample pair and the first text into the updated summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to not include the original text of the first sample pair, and this can be repeated for the third phrase]); obtaining a prediction probability for the fourth phrase by inputting an original text of the fourth sample pair and the second text into the updated summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens [here the input may be specifically chosen to not include the original text of the first sample pair, and this can be repeated for the fourth phrase]). Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]). 
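The Eisenstadt-based scoring step (obtaining a prediction probability for each candidate phrase given the original text and an extracted prefix) might look roughly like this sketch; `score_phrase` is a stand-in for a real summary model's conditional score, and the softmax normalization is an assumption rather than anything recited in the references.

```python
import math

def score_phrase(original: str, prefix: str, phrase: str) -> float:
    """Stand-in for a summary model's raw score. A real model would
    condition on `prefix`; here we just use lexical overlap between the
    candidate phrase and the original text."""
    return sum(t in original.split() for t in phrase.split())

def phrase_probabilities(original, prefix, candidates):
    """Softmax the stand-in scores into one prediction probability per
    predefined candidate phrase."""
    scores = [score_phrase(original, prefix, p) for p in candidates]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {p: e / total for p, e in zip(candidates, exps)}

probs = phrase_probabilities(
    "the senate passed the bill", "senate passed", ["bill", "motion"])
```

The resulting distribution sums to one, and phrases supported by the original text score higher, which is all the downstream selection steps need.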
Saleh, as modified above, does not teach, however Embar teaches extracting a first text positioned before the third phrase from the summary of the third sample pair (Embar in [0050] teaches identifying a window of text that starts three tokens before the matched trigger phrase [this can be repeated for the third phrase]) and extracting a second text positioned before the fourth phrase from a summary of the fourth sample pair (Embar in [0050] teaches identifying a window of text that starts three tokens before the matched trigger phrase [this can be repeated for the fourth phrase]). Embar is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Embar to allow for identifying text before a phrase. Motivation to do so would allow for generating the conversation summary including providing the actual highlight as at least a portion of the conversation summary (Embar [0016]). Saleh, as modified above, does not teach, however Kidd teaches evaluating a performance of the updated summary model based on a difference between the prediction probability for the third phrase and the prediction probability for the fourth phrase (Kidd in [0097] teaches determining a probability difference between the first probability of detection and the second probability of detection; and based on determining that the probability difference is greater than a threshold probability difference [this can be repeated for the third and fourth phrases]). Kidd is considered to be analogous to the claimed invention because it is in the same field of creating a summary. 
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Kidd to allow for determining a probability difference between the first probability of detection and the second probability of detection. Motivation to do so would allow for the system to generate summaries across different devices (Kidd [0072]). Saleh, as modified above, does not teach, however Tiwari teaches generating a fourth sample pair by replacing [a third phrase that appears simultaneously in the original text and the summary of the third sample pair] with a fourth phrase (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases [this can be repeated to replace the third phrase]). Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Lee, in view of Kascenas, in view of Kidd, in view of Du et al. (US Patent Pub. No. 20250284724 A1), hereinafter Du. Regarding claim 19, Saleh, as modified above, teaches the method of claim 1. Saleh, as modified above, teaches the first original text, the first summary, the summary model, the original text, the summary, the second original text, and the second summary. 
Saleh further teaches further comprising: acquiring a first original text (Saleh in [0022] teaches using a labeled training dataset which includes text documents [original text] and corresponding ground-truth summaries of the text documents); generating a first summary for the first original text and generating a second summary for [the second original text] through the updated summary model (Saleh in [0024] teaches using a training engine which needs multiple millions of pairs of text documents and human-written summaries in order to train the network to generate meaningful and linguistically fluent summaries [this may be repeated for each text as needed]). Saleh, as modified above, does not teach, however Kascenas teaches generating a second original text by modifying a phrase included in the first original text (Kascenas in [0116] teaches negative pair data may be generated by the negative pair generator from training data sets that may be normal before modification by the negative pair generator [this may be repeated to modify the original text]). Kascenas is considered to be analogous to the claimed invention because it is in the same field of generating training data sets. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Kascenas to allow for negative pair data to be generated by the negative pair generator.
Motivation to do so would allow for generating training samples for training a machine learning model to perform a task (Kascenas [0033]). Saleh, as modified above, does not teach, however Kidd teaches evaluating a performance of the updated summary model based on a difference between the [first consistency score] and the [second consistency score] (Kidd in [0097] teaches determining a probability difference between the first probability of detection and the second probability of detection; and based on determining that the probability difference is greater than a threshold probability difference [this can be repeated for comparing the difference between the consistency scores]). Kidd is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Kidd to allow for determining a probability difference between the first probability of detection and the second probability of detection. Motivation to do so would allow for the system to generate summaries across different devices (Kidd [0072]). Saleh, as modified above, does not teach, however Du teaches obtaining a first consistency score between the [first original text] and the [first summary] using a function that evaluates factual consistency between an [original text] and a [summary] (Du in [0097] teaches determining a numerical difference between two texts which measures ground truth variation); obtaining a second consistency score between the [second original text] and the [second summary] using the function (Du in [0097] teaches determining a numerical difference between two texts which measures ground truth variation [this can be repeated for the second score]). 
Du is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Du to allow for determining a numerical difference between two texts which measures ground truth variation. Motivation to do so would allow for improving speech-to-text for high-quality summaries (Du [0116]). Claims 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh, in view of Mitsui, in view of Eisenstadt, in view of Tiwari. Regarding claim 20, Saleh teaches a method for generating a summary (Saleh in [0024] teaches a method and system for generating summaries) performed by at least one computing device (Saleh in [0017] teaches using computer programs on one or more computers), the method comprising: acquiring a first sample pair, the first sample pair including an original text and a summary corresponding to the original text (Saleh in [0022] teaches using a labeled training dataset which includes text documents and corresponding ground-truth summaries of the text documents); updating the summary model by performing a summary task [using the second sample pair] (Saleh in [0024] teaches using a training engine which needs multiple millions of pairs of text documents and human-written summaries in order to train the network to generate meaningful and linguistically fluent summaries). 
Saleh does not teach, however Mitsui teaches selecting a first phrase among common phrases that appear simultaneously in the original text and the summary of the first sample pair (Mitsui in [0049] teaches specifying frequent appearance parts of words included in the summary sentences in the original text); extracting a text positioned before the first phrase from the summary of the first sample pair (Mitsui in [0049] teaches specifying frequent appearance parts of words included in the summary sentences in the original text); selecting a second phrase different from the first phrase among the plurality of predefined phrases based on the prediction probability (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability]). Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh further in view of Mitsui to allow for identifying common segments in a summary and the original text. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]).
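The selection logic recited in claims 20 and 21 (choose a replacement phrase, different from the first phrase, whose obtained prediction probability falls below a reference value) can be sketched as follows; the reference value of 0.1, the example probabilities, and the lowest-probability tie-break are illustrative assumptions, since the claims only require the probability to be below the reference value.

```python
def select_replacement(probs: dict, first_phrase: str, reference: float = 0.1):
    """Pick a second phrase different from `first_phrase` whose prediction
    probability is less than `reference`. Among qualifying candidates the
    lowest-probability one is chosen here as one illustrative policy.
    Returns None when no candidate qualifies."""
    candidates = [p for p, prob in probs.items()
                  if p != first_phrase and prob < reference]
    return min(candidates, key=probs.get) if candidates else None

probs = {"bill": 0.73, "motion": 0.18, "treaty": 0.06, "decree": 0.03}
second = select_replacement(probs, first_phrase="bill")
```

A low-probability replacement is exactly what makes the perturbed pair a useful hard negative: the model did not expect that phrase in context.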
Saleh, as modified above, does not teach, however Eisenstadt teaches obtaining a prediction probability for each of a plurality of predefined phrases by inputting the original text of the first sample pair and the extracted text into a summary model (Eisenstadt in [0086] teaches ranking a plurality of input tokens, and in [0051] teaches ranking all the tokens). Eisenstadt is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Eisenstadt to allow for ranking a plurality of input tokens. Motivation to do so would allow for assisting the user in understanding the summarization model outputs thereby improving user trust in the summarization model results as well as providing insights into decision making by the summarization model (Eisenstadt [0017]). Saleh, as modified above, does not teach, however Tiwari teaches generating a second sample pair by replacing the first phrase with the second phrase in the original text and the summary of the first sample pair (Tiwari in [0071] teaches replacing longer phrases, typically of two to four words, with their paraphrases). Tiwari is considered to be analogous to the claimed invention because it is in the same field of creating a summary. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Tiwari to allow for replacing longer phrases. Motivation to do so would allow for handling certain situations by using a query template that allows the process to specify the weight of extracted entities (Tiwari [0046]). Regarding claim 21, Saleh, as modified above, teaches the method of claim 20. 
Saleh, as modified above, teaches the common phrase, and the first phrase. Saleh, as modified above, does not teach, however Mitsui teaches wherein the second phrase is selected among the plurality of predefined phrases whose obtained prediction probability is less than a reference value (Mitsui in [0050] teaches the frequent appearance part may not be a most frequent appearance part, and is, for example, a part where the sum of the index values calculated for each word included in the summary sentences is equal to or greater than a threshold value [here threshold maps to prediction probability], and in [0062] teaches identifying cases where it is less than the threshold value). Mitsui is considered to be analogous to the claimed invention because it is in the same field of text summarization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saleh, as modified above, further in view of Mitsui to allow for identifying where the sum of the index values calculated for each word included in the summary sentences is less than a threshold value. Motivation to do so would allow for an information processing apparatus which includes a processor configured to acquire a summary sentence obtained by summarizing an original text, and performs control such that frequent appearance parts of words included in the summary sentence in the original text are displayed as corresponding parts in the original text corresponding to the summary sentence in a case where the summary sentence is designated (Mitsui [0009]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL J. MUELLER whose telephone number is (571)272-1875. The examiner can normally be reached M-F 9:00am-5:00pm (Eastern). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. PAUL MUELLER Examiner Art Unit 2657 /PAUL J. MUELLER/Examiner, Art Unit 2657

Prosecution Timeline

May 24, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597419: NATURAL LANGUAGE PROCESSING APPARATUS AND NATURAL LANGUAGE PROCESSING METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12596867: Detecting Computer-Generated Hallucinations using Progressive Scope-of-Analysis Enlargement (2y 5m to grant; granted Apr 07, 2026)
Patent 12596886: PERSONALIZED RESPONSES TO CHATBOT PROMPT BASED ON EMBEDDING SPACES BETWEEN USER AND SOCIETY (2y 5m to grant; granted Apr 07, 2026)
Patent 12579378: USING LLM FUNCTIONS TO EVALUATE AND COMPARE LARGE TEXT OUTPUTS OF LLMS (2y 5m to grant; granted Mar 17, 2026)
Patent 12562174: NOISE SUPPRESSION LOGIC IN ERROR CONCEALMENT UNIT USING NOISE-TO-SIGNAL RATIO (2y 5m to grant; granted Feb 24, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+34.6%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
