Prosecution Insights
Last updated: April 19, 2026
Application No. 18/596,629

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Final Rejection: §103, §112
Filed: Mar 06, 2024
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (grants 57% of resolved cases; 4 granted / 7 resolved; -4.9% vs TC avg)
Interview Lift: +100.0% (strong), comparing allowance on resolved cases with vs. without an interview
Avg Prosecution: 2y 10m (typical timeline); 33 applications currently pending
Total Applications: 40 (career history, across all art units)
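The interview-lift figure is a relative comparison of allowance rates among this examiner's resolved cases: +100% means the allowance rate with an interview on record is double the rate without one. A minimal sketch of that arithmetic (the example rates below are illustrative assumptions, not the dashboard's underlying per-case data):

```python
def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percent change in allowance rate for cases with an interview
    relative to cases without one."""
    return (rate_with - rate_without) / rate_without * 100.0

# Illustrative: 0.8 allowance with an interview vs. 0.4 without -> +100% lift.
lift = interview_lift(rate_with=0.8, rate_without=0.4)  # 100.0
```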

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 7 resolved cases
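Each "vs TC avg" delta in this panel is simply the examiner's career rate for that statute minus the Tech Center average estimate, so the averages can be recovered by subtraction; notably, all four rows above imply the same 40.0% estimate. A minimal sketch of that arithmetic (the dictionary layout is illustrative):

```python
# Rates and deltas as displayed in the panel above (percent).
examiner_rate = {"101": 35.1, "103": 43.8, "102": 8.7, "112": 10.6}
delta_vs_tc = {"101": -4.9, "103": 3.8, "102": -31.3, "112": -29.4}

# Recover the Tech Center average estimate per statute: rate - delta.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
# Every statute implies the same 40.0% estimate.
```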

Office Action

§103, §112
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 01/07/2026. Claims 1-14 are pending and have been examined. This action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

No certified English translation is in the official record for the application. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action, 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Response to Arguments

The reply filed on 01/07/2026 has been entered. Applicant's arguments with respect to claims 1-14 have been considered but are not persuasive, or are moot in view of the new ground(s) of rejection necessitated by the amendments.

With respect to the claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that "claim 1, as amended, explicitly recites the features related to an operation phase of the information processing apparatus. As a result, Applicant respectfully submits that the claims are not directed to an abstract idea and requests withdrawal of the rejection." The examiner agrees that these newly added limitations overcome the rejection under 35 U.S.C. § 101.

With respect to the claim rejections under 35 U.S.C. §§ 102 and 103, Applicant's arguments with respect to claims 1-14 have been considered but are not persuasive, or are moot in view of the new ground(s) of rejection necessitated by the amendments. Applicant asserts that "there is no motivation to combine Zhang and Ishihara [because] Zhang and Ishihara solve different problems.
Their techniques are also incompatible: one handles summary sentence selection, and the other detects structural divisions in fixed-format data. A skilled person would not combine them to obtain the claimed mechanism of learning to select an optimal boundary from overlapping candidates." The examiner respectfully disagrees with these assertions. The applicant states that Zhang and Ishihara solve different problems, but fails to state why solving different problems would negate a motivation to combine; combining techniques from various places within a field of technology is expected of a person having ordinary skill in the art. Further, the applicant asserts that Zhang's techniques are incompatible with Ishihara's techniques, but then merely summarizes each technique without stating why the two would be incompatible. Finally, the applicant states that the claimed mechanisms of learning include "select[ing] an optimal boundary from overlapping candidates," but no such language appears in the current claim limitations. As amended, the examiner finds Zhang in view of Ippili et al. (US Patent Publication 20070260613 A1) in view of Ishihara et al. to render obvious every claim limitation found in the independent claims, as addressed in further detail below with respect to the claim rejections under 35 U.S.C. § 103.

The Applicant further asserts that "Even in claim 10 regarding reinforcement learning, the reward is tied to boundary estimation, which Zhang does not teach. Rather, Zhang evaluates text only at the sentence level, and does not estimate finer-grained or overlapping boundary candidates." The examiner respectfully disagrees with these assertions. Regarding claim 10, there is no claim language that ties the reward to a boundary estimation.
Rather, according to the claim language, an "evaluation value of the character string set" is used "as a reward in a case in which the generative model is trained through reinforcement learning." There is no mention of a reward that is strictly tied to boundary estimation, as the applicant contends. As amended, the claim limitations are rendered obvious by Zhang et al. in view of Ippili et al. in view of Ishihara et al., as addressed in further detail below with respect to the claim rejections under 35 U.S.C. § 103.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. The following terms in the claims have been given the following interpretations in light of the specification:

Second document data: paragraphs [0019]-[0022], "document data 36 is an example of second document data according to the technology of the present disclosure. ... In the present embodiment, the document data 36 is document data in which the document data 34 is summarized. The document data 36 is, for example, created by the user." Thus, second document data is any document created from the first document data, including user-generated summaries. This definition is used for purposes of searching for prior art, but is not incorporated into the claims. Should Applicant intend a different definition, Applicant should point to the portions of the specification that clearly set forth that definition.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 10-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 10 recites the limitations "the first character string set" and "the second character string set". There is insufficient antecedent basis for these limitations in the claim. Claim 11 is rejected for depending upon rejected claim 10. Claims 12 and 13 are rejected for similar reasons as described above with respect to claim 10.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 5, 8-10, and 12-14 are rejected under 35 U.S.C. 103 as obvious over "Neural Latent Extractive Document Summarization" (Zhang et al.) in view of US Patent Publication 20070260613 A1 (Ippili et al.) in view of US Patent Publication 20130246444 A1 (Ishihara et al.).

Claim 1

Regarding claim 1, Zhang et al.
disclose an information processing apparatus comprising: at least one processor (Zhang et al. pg. 4, Section 3, Paragraph 2, "We trained our extractive model on an Nvidia K80 GPU card") [connected to the bus], wherein the at least one processor is configured to: acquire first document data (Zhang et al. pg. 2, Section 2.1, Paragraph 2, "our extractive model has three parts: a sentence encoder to convert each sentence into a vector, a document encoder to learn sentence representations given surrounding sentences as context, and a document decoder to predict sentence labels based on representations learned by the document encoder."); generate a first character string set by dividing the first document data (Zhang et al. pg. 2, Section 2.1, Paragraph 2, "Let D = (S_1, S_2, …, S_{|D|}) denote a document and S_i = (w_1^i, w_2^i, …, w_{|S_i|}^i) a sentence in D (where w_j^i is a word in S_i).") [into different lengths]; derive an evaluation value of each character string constituting the first character string set (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "We estimate the likelihood of summary sentence H_l given document sentence C_k with the compression model introduced in Section 2.2 and calculate the normalized probability s_{kl}". Probability s_{kl} is considered analogous to an evaluation value) by using the first character string set (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "We use the extractive model from Section 2.1 to produce probability distributions for latent variables (see Equation (3)) and obtain them by sampling z_i ∼ p(z_i | z_{1:i-1}, h_{i-1}^D) (see Figure 1). C = {S_i | z_i = 1}, the set of sentences whose latent variables equal to one, are our current extractive summaries." See Figure 1. Latent variables are calculated for each sentence (character string) using the neighboring sentences (first character string set) as context-dependent sentence representations. Therefore, using latent variables to create a subset of character strings for evaluation is considered analogous to using a first character string set to derive an evaluation value) and second document data created from the first document data in accordance with a purpose (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "Let D = (S_1, S_2, …, S_{|D|}) denote a document and H = (H_1, H_2, …, H_{|H|}) its human summary (H_k is a sentence in H)." Human summaries are considered analogous to second document data; see Claim Interpretation), select, based on the derived evaluation value, a plurality of character strings from the first character string set as correct answer data of a generative model (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "For simplicity, we assume one document sentence can only find one summary sentence to explain it. Therefore, for all H_l, we only retain the most evident s_{kl}.") that receives input of document data (Zhang et al. pg. 2, Section 2.1, Paragraph 2, "Let D = (S_1, S_2, …, S_{|D|}) denote a document and S_i = (w_1^i, w_2^i, …, w_{|S_i|}^i) a sentence in D (where w_j^i is a word in S_i).") and outputs a second character string set including a plurality of character strings included in the input document data (Zhang et al. pg. 4, Section 3, Paragraph 2, "During inference, for both extractive and latent models, we rank sentences ... and select the top three as summary"), train the generative model so that an error between the second character string set and the first character string set is minimized (Zhang et al. pg. 3, Section 2.3, Paragraph 2-3, "Let D = (S_1, S_2, …, S_{|D|}) denote a document and H = (H_1, H_2, …, H_{|H|}) its human summary (H_k is a sentence in H). ... C = {S_i | z_i = 1}, the set of sentences whose latent variables equal to one, are our current extractive summaries. ... We train the model by minimizing the negative expected R(C, H)". C = {S_i | z_i = 1} is considered analogous to the first character string set. H = (H_1, H_2, …, H_{|H|}) is considered analogous to the second character string set), and generate a third character string set by inputting target document data to the generative model (Zhang et al. pg. 2, Section 2.1, Paragraph 1-2, "In extractive summarization, a subset of sentences in a document is selected as its summary. We model this problem as an instance of sequence labeling. ... As shown in the lower part of Figure 1, our extractive model has ... a document encoder to learn sentence representations given surrounding sentences as context, and a document decoder to predict sentence labels based on representations learned by the document encoder. Let D = (S_1, S_2, …, S_{|D|}) denote a document and S_i = (w_1^i, w_2^i, …, w_{|S_i|}^i) a sentence in D (where w_j^i is a word in S_i)." D = (S_1, S_2, …, S_{|D|}) is considered analogous to a third character string set).

Zhang et al. do not explicitly disclose an interactive user interface. However, Ippili et al. disclose an information processing apparatus comprising: an input device connected to a bus (Ippili et al. ¶ [0058], "Digital processing system 100 may contain ... input interface 190. All the components except display unit 170 may communicate with each other over communication path 150, which may contain several buses as is well known in the relevant arts."); a display connected to the bus (Ippili et al. ¶ [0060], "Graphics controller 160 generates display signals (e.g., in RGB format) to display unit 170 based on data/instructions received from CPU 110. Display unit 170 contains a display screen to display the images defined by the display signals."); and at least one processor connected to the bus (Ippili et al.
¶ [0059], "CPU 110 may execute instructions stored in RAM 120 to provide several features of the present invention described in the present application."), wherein the at least one processor is configured to: generate a third character string set by inputting target document data (Ippili et al. ¶ [0171], "In step 2020, script maintenance system receives data indicating a portion of the displayed text selected ("selection text") by a user, a search string containing multiple lines of text and a replace string containing multiple lines of text." A replace string containing multiple lines of text is considered analogous to a third character string set) [to the generative model], receive a position in the target document data designated by a user via the input device (Ippili et al. ¶ [0176], "FIG. 21A depicts a user interface to specify a selection of text"), and specify and display, on the display, a character string described at the position designated by the user from among a plurality of character strings included in the third character string set (Ippili et al. ¶ [0178]-[0179], "The user on clicking multi-line replace button 2160, script maintenance system substitutes each occurrence of the search string with the replace string in the selected text as per step 2050. FIG. 21B depicts the results corresponding to the substitutions requested in FIG. 21A." See Figures 21A-B, which illustrate displaying the character strings from the replace string specified in text box 2120 at the position designated by the user).

Zhang et al. teach generating sentences using a generative model; see the above citations. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Zhang et al.'s information processing system to incorporate Ippili et al.'s user interface elements.
The suggestion/motivation for doing so would have been that, "There is a general need to provide user interface features, which can be conveniently used or adapted for various environments of interest," as noted by the Ippili et al. disclosure in paragraph [0006].

Zhang et al. in view of Ippili et al. do not explicitly disclose dividing a document into different lengths. However, Ishihara et al. disclose generating a first character string set by dividing the first document data into different lengths (Ishihara et al. ¶ [0064], "FIG. 5A, FIG. 5B, ... [and] FIG. 6A ... illustrate a process of dividing the fixed-length data 12 in accordance with every divisor of the length of the entirety of the fixed-length data 12 and assuming one record length. FIG. 5A illustrates the fixed-length data 12 of 30 characters as an example of the fixed-length data 12. Accordingly, the entirety of the fixed-length data 12 has a length of 30, and the divisors thereof are 30, 15, 10, 6, 5, 3, 2 and 1. ... FIG. 5B illustrates divided data 12A-1 and 12A-2 into which the fixed-length data 12 is divided by the divisor of 15. ... FIG. 6A illustrates divided data 12B-1, 12B-2 and 12B-3 into which the fixed-length data 12 is divided by the divisor of 10."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Zhang et al. in view of Ippili et al.'s information processing system to incorporate Ishihara et al.'s different-length divisions. The suggestion/motivation for doing so would have been that, "break positions of items are automatically determined … This reduces the time and effort taken for manually input the start position and length of each item included in the fixed-length data to a computer," as noted by the Ishihara et al. disclosure in paragraph [0052].

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Zhang et al.
further disclose wherein the at least one processor is configured to select, based on the evaluation value, any one of the [plurality of] first character string [sets] as the correct answer data of the generative model (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "For simplicity, we assume one document sentence can only find one summary sentence to explain it. Therefore, for all H_l, we only retain the most evident s_{kl}."). Ishihara et al. further disclose generating a plurality of the first character string sets by dividing the first document data into different lengths from each other (Ishihara et al. ¶ [0064]-[0066], "FIG. 5A illustrates the fixed-length data 12 of 30 characters as an example of the fixed-length data 12. Accordingly, the entirety of the fixed-length data 12 has a length of 30, and the divisors thereof are 30, 15, 10, 6, 5, 3, 2 and 1. ... FIG. 5B illustrates divided data 12A-1 and 12A-2 into which the fixed-length data 12 is divided by the divisor of 15. ... FIG. 6A illustrates divided data 12B-1, 12B-2 and 12B-3 into which the fixed-length data 12 is divided by the divisor of 10." ¶ [0061], "the first analyzing unit 16, in operation 126, determines whether processing has been completed for all the divisors. If the determination of operation 126 is negative, the first analyzing unit 16 returns to operation 112 and continues processing from operation 112." See Fig. 4, which illustrates the flowchart for choosing and applying divisors of different lengths).

Claim 4

Regarding claim 4, the rejection of claim 1 is incorporated. Zhang et al. further disclose wherein the at least one processor is configured to select, based on the derived evaluation value, a plurality of character strings from the first character string set as the correct answer data of the generative model (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "For simplicity, we assume one document sentence can only find one summary sentence to explain it. Therefore, for all H_l, we only retain the most evident s_{kl}.") in a state in which there is no overlapping portion between the plurality of character strings (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "We use the extractive model from Section 2.1 to produce probability distributions for latent variables (see Equation (3)) and obtain them by sampling z_i ∼ p(z_i | z_{1:i-1}, h_{i-1}^D) (see Figure 1). C = {S_i | z_i = 1}, the set of sentences whose latent variables equal to one, are our current extractive summaries." pg. 2, Section 2.1, Paragraph 1, "a document is viewed as a sequence of sentences and the model is expected to predict a True or False label for each sentence, where True indicates that the sentence should be included in the summary." Sentences in a document are each labelled individually, which implies zero overlap between sentences. Sentence set C is a subset of all document sentences that are labelled with z_i = 1 (e.g., True). Therefore, sentence set C is considered analogous to a non-overlapping plurality of character strings.). Ishihara et al. further disclose generating one first character string set by repeating processing of dividing the first document data while varying the length of the division (Ishihara et al. ¶ [0064]-[0065], "FIG. 5A illustrates the fixed-length data 12 of 30 characters as an example of the fixed-length data 12. Accordingly, the entirety of the fixed-length data 12 has a length of 30, and the divisors thereof are 30, 15, 10, 6, 5, 3, 2 and 1. Note that the results of dividing the entire length by individual divisors are 1, 2, 3, 5, 6, 10, 15 and 30. FIG. 5B illustrates divided data 12A-1 and 12A-2 into which the fixed-length data 12 is divided by the divisor of 15. ... FIG. 6A illustrates divided data 12B-1, 12B-2 and 12B-3 into which the fixed-length data 12 is divided by the divisor of 10."
¶ [0061], "the first analyzing unit 16, in operation 126, determines whether processing has been completed for all the divisors. If the determination of operation 126 is negative, the first analyzing unit 16 returns to operation 112 and continues processing from operation 112.").

Claim 5

Regarding claim 5, the rejection of claim 1 is incorporated. Zhang et al. further disclose wherein the evaluation value is a value of which evaluation is higher as a rate of match between each character string constituting the first character string set and the second document data is higher (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "Let D = (S_1, S_2, …, S_{|D|}) denote a document and H = (H_1, H_2, …, H_{|H|}) its human summary (H_k is a sentence in H) ... We estimate the likelihood of summary sentence H_l given document sentence C_k with the compression model introduced in Section 2.2 and calculate the normalized probability s_{kl}. ... The score R_p measures the extent to which H can be inferred from C: R_p(C, H) = (1/|C|) Σ_{k=1}^{|C|} max_{l=1}^{|H|} s_{kl}." R_p(C, H) increases as the rate of match between sentence set C (the first character string set) and sentence set H (the second document data) increases. Therefore, R_p(C, H) is considered analogous to an evaluation value).

Claim 8

Regarding claim 8, the limitations of claim 8 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claim 9

Regarding claim 9, Ishihara et al. disclose a non-transitory computer-readable storage medium storing an information processing program for causing a processor provided in an information processing apparatus to execute a process (Ishihara et al. ¶ [0009], "a device includes a memory configured to store a program; and a processor coupled to the memory and configured to execute a process based on the program").
The remaining limitations of claim 9 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claim 10

Regarding claim 10, the limitations of claim 10 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claim 12

Regarding claim 12, the limitations of claim 12 are similar in scope to those of claim 1 and therefore are rejected for similar reasons as described above.

Claim 13

Regarding claim 13, the limitations of claim 13 are similar in scope to those of claims 9 and 10 and therefore are rejected for similar reasons as described above.

Claim 14

Regarding claim 14, the rejection of claim 2 is incorporated. Ishihara et al. further disclose generating a plurality of the first character string sets by dividing the first document data by units having certain lengths, wherein the certain lengths are different between the plurality of first character string sets (Ishihara et al. ¶ [0064]-[0065], "FIG. 5A illustrates the fixed-length data 12 of 30 characters as an example of the fixed-length data 12. Accordingly, the entirety of the fixed-length data 12 has a length of 30, and the divisors thereof are 30, 15, 10, 6, 5, 3, 2 and 1. ... FIG. 5B illustrates divided data 12A-1 and 12A-2 into which the fixed-length data 12 is divided by the divisor of 15. ... FIG. 6A illustrates divided data 12B-1, 12B-2 and 12B-3 into which the fixed-length data 12 is divided by the divisor of 10." ¶ [0061], "the first analyzing unit 16, in operation 126, determines whether processing has been completed for all the divisors. If the determination of operation 126 is negative, the first analyzing unit 16 returns to operation 112 and continues processing from operation 112.").

Claims 3 and 11 are rejected under 35 U.S.C. 103 as obvious over Zhang et al. in view of Ippili et al. in view of Ishihara et al.
as applied to claim 2 above, and further in view of US Patent Publication 20200401797 A1 (Hirao et al.).

Claim 3

Regarding claim 3, the rejection of claim 2 is incorporated. Zhang et al. in view of Ippili et al. in view of Ishihara et al. disclose all the elements of the claimed invention as stated above. Zhang et al. further disclose wherein the at least one processor is configured to derive a total value of the evaluation values of the character strings constituting the first character string set as the evaluation value of the first character string set (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "We estimate the likelihood of summary sentence H_l given document sentence C_k with the compression model introduced in Section 2.2 and calculate the normalized probability s_{kl}. ... The score R_p measures the extent to which H can be inferred from C: R_p(C, H) = (1/|C|) Σ_{k=1}^{|C|} max_{l=1}^{|H|} s_{kl}. ... R_p(C, H) can be viewed as the "precision" of document sentences with regard to summary sentences." R_p is considered analogous to a total value of evaluation scores). Zhang et al. in view of Ippili et al. in view of Ishihara et al. do not explicitly disclose decreasing a derived evaluation value based on a quantity of character strings. However, Hirao et al. disclose deriving a total value of the evaluation values of the character strings constituting the first character string set as the evaluation value of the first character string set (Hirao et al. ¶ [0031], "the score of the system summary is the sum of the scores of the units". The system summary is considered analogous to a first character string set), and decreasing the derived evaluation value of the first character string set in accordance with at least one of a quantity of character strings, which are included in the first character string set but not included in the second document data (Hirao et al.
¶ [0029]-[0031], "one unit u of the system summary is allocated to a unit c included in the set of oracles by solving the allocation problem of Formula (2) below, which is provided with a restriction such that there is at most one u that is associated with a c. ... Letting o(i) be the index of u associated with c_i, the score of the unit c_i of the system summary is sim(c_i, u_{o(i)}) · cnt(u_{o(i)})". ¶ [0027], "cnt() is a function that returns the number of oracles in which u appears. That is, cnt() has a maximum of K and a minimum of 1." The cnt() function decreases the score of system summary unit u (the evaluation value of a character string) if the unit u is present in fewer oracle summaries. An oracle is considered analogous to second document data. The score of a system summary is the sum of the scores of its individual units. Therefore, it follows that the evaluation value of the first character string set (the score of the system summary) decreases in accordance with a quantity of character strings included in the first character string set (the system summary) but not included in the second document data (the K oracles)), or a quantity of character strings, which are included in the second document data but not included in the first character string set. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Zhang et al. in view of Ippili et al. in view of Ishihara et al. to incorporate Hirao et al.'s summary evaluation system. The suggestion/motivation for doing so would have been that, "an automatic evaluation index is proposed which uses an oracle…. By using this automatic evaluation index, it is possible to intuitively analyze whether or not a summary system has successfully extracted sentences or phrases," as noted by the Hirao et al. disclosure in paragraph [0017].
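The cnt()-weighted unit scoring quoted from Hirao et al. can be sketched as follows. This is a hedged reconstruction from the quoted paragraphs only: the word-overlap `sim`, the greedy best-match allocation, and all names are illustrative stand-ins (the reference itself solves the allocation problem of its Formula (2)). Units appearing in more of the K oracles carry more weight, so strings absent from the oracles pull the total down, mirroring the claim 3 mapping above.

```python
# Hedged sketch of oracle-count-weighted summary scoring (names illustrative,
# not from the Hirao et al. publication).

def sim(a: str, b: str) -> float:
    """Toy word-overlap (Jaccard) similarity between two text units."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def summary_score(system_units: list[str], oracles: list[list[str]]) -> float:
    """Sum of per-unit scores: each system unit's similarity to its
    best-matching oracle unit, weighted by cnt(u) -- the number of the
    K oracles containing the unit, clamped to a minimum of 1."""
    oracle_units = [c for o in oracles for c in o]
    total = 0.0
    for u in system_units:
        if not oracle_units:
            break
        best = max(oracle_units, key=lambda c: sim(u, c))  # greedy allocation
        cnt = max(1, sum(u in o for o in oracles))  # 1..K, per the quoted cnt()
        total += sim(u, best) * cnt
    return total
```

Here a unit present verbatim in both of two oracles contributes twice the weight of an equally similar unit found in neither, which matches the examiner's reading that the summary-level total drops when the system summary contains strings absent from the second document data.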
Claim 11

Regarding claim 11, the rejection of claim 10 is incorporated. The limitations of claim 11 are similar in scope to those of claim 3 and therefore are rejected for similar reasons as described above.

Claim 6 is rejected under 35 U.S.C. 103 as obvious over Zhang et al. in view of Ippili et al. in view of Ishihara et al. as applied to claim 1 above, and further in view of US Patent Publication 20140229159 A1 (Branton).

Claim 6

Regarding claim 6, the rejection of claim 1 is incorporated. Zhang et al. in view of Ippili et al. in view of Ishihara et al. disclose all the elements of the claimed invention as stated above. Zhang et al. in view of Ippili et al. in view of Ishihara et al. do not explicitly disclose a higher evaluation for longer sentence units. However, Branton discloses wherein the evaluation value is a value of which evaluation is higher as each character string constituting the first character string set is longer (Branton ¶ [0053], "The sentence length can be divided by the average sentence length (this can be pre-computed using a rolling mean calculation during the initial sentence processing). If less than 1, the modifier is adjusted to 1. The score for the sentence is divided by this modifier. ... This provides a handicap for longer sentences to stop their scores from dwarfing shorter, more eligible, sentences. The sentences can be ranked based on their scores, and in certain embodiments, the top three sentences can be chosen." Necessitating a handicap for longer sentences implies that an evaluation value is higher as each character string is longer). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Zhang et al. in view of Ippili et al. in view of Ishihara et al.
to include Branton's length evaluation system because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Zhang et al. in view of Ippili et al. in view of Ishihara et al. as modified by Branton's length evaluation system can yield the predictable result of improving summary quality, since longer sentence units have a greater probability of comprising a complete sentence structure than shorter sentence units. Thus, a person of ordinary skill would have appreciated including in Zhang et al. in view of Ippili et al. in view of Ishihara et al. the ability to use Branton's length evaluation, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 7 is rejected under 35 U.S.C. 103 as obvious over Zhang et al. in view of Ippili et al. in view of Ishihara et al. as applied to claim 1 above, and further in view of Japan Patent Publication JP 2013097723 A (Nishikawa et al.).

Claim 7

Regarding claim 7, the rejection of claim 1 is incorporated. Zhang et al. in view of Ippili et al. in view of Ishihara et al. disclose all the elements of the claimed invention as stated above. Zhang et al. further disclose wherein the at least one processor is configured to generate a second character string set corresponding to the first document data (Zhang et al. pg. 4, Section 3, Paragraph 2, "During inference, for both extractive and latent models, we rank sentences ... and select the top three as summary". The output summary is considered analogous to a second character string set) by inputting the first document data to the generative model (Zhang et al. pg.
3, Section 2.3, Paragraph 2, "Our latent variable model views sentences in a document as binary variables (i.e., zeros and ones) and uses sentences with activated latent variables (i.e., ones) to infer gold summaries."), and train the generative model (Zhang et al. pg. 2, Section 2.1, Paragraph 5, "The model described above is usually trained by minimizing the negative log-likelihood of sentence labels in training documents") [so that an error between the second character string set] and a plurality of character strings is selected from the first character string set (Zhang et al. pg. 3, Section 2.3, Paragraph 2, "We use the extractive model from Section 2.1 to produce probability distributions for latent variables (see Equation (3)) and obtain them by sampling z_i ∼ p(z_i | z_{1:i-1}, h_{i-1}^D) (see Figure 1). C = {S_i | z_i = 1}, the set of sentences whose latent variables equal one, are our current extractive summaries." Sentence set C is considered analogous to a plurality of strings selected from the first character string set) [is minimized].

Zhang et al. in view of Ippili et al. in view of Ishihara et al. do not explicitly disclose training a generative model using an error between sentence sets. However, Nishikawa et al. disclose training the generative model so that an error between the second character string set and a plurality of character strings [selected from the first character string set] is minimized (Nishikawa et al. pg. 2, Paragraphs 7-8, "the old value of the weight vector as the parameter is w_old, the text included in the training example is x_i, and the summary included in the training example is y_i. The summary generated from the text x_i included in the training set using a value before the update of the weight vector as the parameter is y'. ... When the evaluation scale is ROUGE and the error is loss(y'; y), loss(y'; y_i) = 1 − ROUGE(y'; y_i). The parameter can be estimated by calculating the updated value w_new of the weight vector as the parameter according to the above equations (1) to (3)." pg. 2, Paragraphs 7-8, "In learning the parameters, the parameters are learned to minimize some error function." y' is considered analogous to a second character string set, y_i is considered analogous to a plurality of character strings, and therefore loss(y'; y_i) is considered analogous to an error between a plurality of character strings and a second character string set).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Zhang et al. in view of Ippili et al. in view of Ishihara et al. to include Nishikawa et al.'s error-based training because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Zhang et al. in view of Ippili et al. in view of Ishihara et al. as modified by Nishikawa et al.'s error-based training can yield a predictable result of improving summary quality, since minimizing the difference between a selection of character strings and its corresponding summary would reduce the inherent discrepancies in transforming the selection of character strings into the output summary. Thus, a person of ordinary skill would have appreciated including in Zhang et al. in view of Ippili et al. in view of Ishihara et al. the ability to use Nishikawa et al.'s error-based training, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
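For illustration of the Nishikawa et al. formulation quoted above, the training error loss(y'; y_i) = 1 − ROUGE(y'; y_i) can be sketched in Python. This is a minimal sketch under assumptions: `rouge_1_recall` is a simplified unigram-recall stand-in for the full ROUGE metric, and the function names are illustrative, not taken from the cited reference.

```python
from collections import Counter

def rouge_1_recall(candidate, reference):
    """Simplified unigram-recall approximation of ROUGE-1: the fraction of
    reference tokens (with multiplicity) that also appear in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

def summary_loss(generated, gold):
    """Nishikawa-style error: loss(y'; y_i) = 1 - ROUGE(y'; y_i).
    Training updates the weight vector so as to minimize this error."""
    return 1.0 - rouge_1_recall(generated, gold)
```

Under this sketch, a generated summary identical to the gold summary yields a loss of 0, and the loss grows as the overlap with the gold summary shrinks, which is the quantity the parameter update in Nishikawa et al.'s equations (1) to (3) drives down.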
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/11/2026
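The length-handicap scoring quoted earlier from Branton ¶ [0053] (divide each sentence's length by the average length, clamp the modifier to at least 1, divide the sentence's score by the modifier, then rank and keep the top sentences) can be sketched as follows. This is an illustrative sketch only: `rank_sentences` and the word-count length measure are assumptions, not Branton's actual implementation.

```python
def rank_sentences(sentences, scores, top_k=3):
    """Length-handicapped ranking in the manner of Branton ¶ [0053]:
    penalize longer-than-average sentences so their raw scores do not
    dwarf shorter, more eligible sentences, then keep the top_k."""
    lengths = [len(s.split()) for s in sentences]
    avg_len = sum(lengths) / len(lengths)
    adjusted = []
    for sent, length, score in zip(sentences, lengths, scores):
        # Modifier below 1 is adjusted to 1, so only long sentences are handicapped.
        modifier = max(length / avg_len, 1.0)
        adjusted.append((score / modifier, sent))
    adjusted.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in adjusted[:top_k]]
```

Because the modifier is clamped at 1, sentences at or below average length keep their raw scores, while an above-average sentence needs a proportionally higher raw score to stay in the top three; this is the behavior the rejection reads as implying a higher evaluation for longer character strings.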

Prosecution Timeline

Mar 06, 2024
Application Filed
Nov 04, 2025
Non-Final Rejection — §103, §112
Jan 07, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+100.0%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
