DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 10/07/2025. Claims 1-11 are pending and have been examined. This action is made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of the claim for priority to international application No. PCT/JP2023/017860, filed May 12, 2023. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 365 and 37 CFR 1.78.
Response to Arguments
The reply filed on 10/07/2025 has been entered. Applicant’s arguments with respect to claims 1-10 have been considered but are either unpersuasive or moot in view of the new ground(s) of rejection necessitated by the amendments.
With respect to Applicant’s arguments regarding the claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that “measures are not mere numerical outputs; they serve as intermediate control variables that are transformed into reliability scores via a pre-learned mapping function.” The examiner respectfully disagrees with these assertions. While Applicant asserts that the nature of the “measures” in the independent claims surpasses mere numerical outputs, no such structure is defined in the claim limitations. Instead, the measures are claimed only as the result of a calculation process measuring the relationship between components. Under the broadest reasonable interpretation, a measure cannot be limited to control variables as Applicant asserts.
Applicant further asserts that “reliability-based gating and source attribution are embedded at the core of the workflow, providing a technical solution to the computer-specific issue of "hallucination," and clearly demonstrating integration into practical applications.” The examiner respectfully disagrees with these assertions. While hallucinations are a known issue in the field of generative large language models, they are entirely unrelated to the claim set at hand. As amended, the claims specify only a “machine learning model trained to generate a text.” Nowhere in the claim language or the specification of the instant application is the use of an LLM specifically noted, nor is any mention made of hallucination or the problems associated with it.
Applicant further asserts that “intelligent feedback operates by dynamically adjusting internal parameters of generation and retrieval based on reliability evaluation, thereby enabling self-correction. This leads to direct system-level effects such as suppression of invalid outputs, avoidance of unnecessary downstream processing, reduction of computational resources and latency, and improved traceability of sources.” The examiner respectfully disagrees with these assertions. While the computation of a reliability value from measures of relationships between input variables is noted, the examiner fails to see how the reliability value is further used for generation, or how it is used to adjust internal parameters of generation. According to the amended claim limitations, the reliability value is used only for comparing retrieved documents to a threshold and determining whether to output them to the user. As amended, there is no mention of self-correction, intelligent feedback, or dynamic adjustment of generation parameters in either the claim language or the instant specification.
Applicant further asserts that “the flow, with cascading dependencies, comprising generation → retrieval → computation of two relational measures → reliability transformation via a learned mapping → behavior switching based on thresholding (selection of supporting subset and identifier assignment/ suppression and reevaluation) functions directly in line with the objective of mitigating hallucinations and providing source attribution in LLMs.” The examiner respectfully disagrees with these assertions. As stated above, while hallucinations are a known issue in the field of generative large language models, they are entirely unrelated to the claim set at hand. As amended, the claims only specify a “machine learning model trained to generate a text.” Nowhere in either the claim language or the specification of the instant application is the use of an LLM specifically noted, nor is any mention of hallucination or the problems associated with hallucination noted or explained.
Applicant further asserts that “the amended claims define a concrete control flow as an apparatus, centered on the design of two types of relational measures and the generation of reliability scores via a learned mapping. This flow includes threshold-based output gating, selection of supporting subsets, and suppression/re-evaluation of output. This is not merely an "analysis or presentation of information," but a practical application that actually improves computer functionality by mitigating hallucinations and providing source attribution in generative AI.” The examiner respectfully disagrees with these assertions. As stated above, while hallucinations are a known issue in the field of generative large language models, they are entirely unrelated to the claim set at hand. Nowhere in either the claim language or the specification of the instant application is the use of an LLM specifically noted, nor is any mention of hallucination or the problems associated with hallucination noted or explained. As amended, there is no language in the independent claims that would prevent a human from performing these steps, as addressed in further detail below with respect to claim rejections under 35 USC § 101.
With respect to the applicant’s arguments to claim rejections under 35 U.S.C § 103, the applicant’s arguments with respect to claims 1-10 have been considered but are moot in view of new ground(s) of rejection caused by the amendments.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification.
The following terms in the claims have been given the following interpretations in light of the specification:
Reliability: paragraph [0049], “For example, the calculation section 21 may calculate the reliability of the first result with use of existing techniques. … Examples of a method of calculating the reliability of the first result carried out by the calculation section 21 with use of both the first result and the document extracted by the extraction section 14 may include (a) a method based on inter-word distance, (b) a method based on inter-document distance, or (c) a method based on a learning model.”
Thus, for purposes of searching the prior art, reliability is interpreted as any measure of similarity or distance for evaluating the output of a machine learning model. This interpretation is not imported into the claims.
Matching degree: paragraph [0058]-[0059], “The matching degree calculation section 23 calculates the matching degree between the document extracted by the extraction section 14 and the first result (text etc.) generated by the generation section 12. … when the first result includes a string, the matching degree calculation section 23 may, for example, calculate the matching degree based on the degree of matching of strings measured by comparing the first result and the document extracted by the extraction section 14.”
Thus, for purposes of searching the prior art, matching degree is interpreted as any measure of similarity between texts or documents. This interpretation is not imported into the claims.
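For purposes of illustration only, and forming no part of the claim construction or the record, the breadth of this interpretation may be seen in the following sketch, under which a routine character-level string comparison would qualify as a “matching degree” (the function name is hypothetical):

```python
from difflib import SequenceMatcher

def matching_degree(generated_text: str, extracted_document: str) -> float:
    """Any measure of similarity between texts falls within the
    interpretation above; a character-level similarity ratio is
    one example of such a measure."""
    return SequenceMatcher(None, generated_text, extracted_document).ratio()

# Identical strings yield 1.0; wholly dissimilar strings yield 0.0.
score = matching_degree("the cat sat on the mat", "the cat sat on a mat")
```

Any comparable measure, such as token overlap or edit distance, would fall within the same interpretation.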
Should Applicant intend different definitions, Applicant should point to the portions of the specification that clearly set forth those definitions.
Claim Objections
Claims 2 and 5-7 are objected to because of the following informalities:
Claim 2, line 3, should read “calculating reliability of the generated text”.
Claims 5-7 are objected to for reasons analogous to claim 2 above.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-11 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor had possession of the claimed invention.
Independent claim 1 has been amended to recite:
“in response to a determination that the reliability value does not exceed the threshold, refraining from outputting the result and optionally repeating valuation of the reliability” (emphasis added).
Applicant fails to identify paragraphs or figures of the instant application that allegedly support these amendments. Additionally, the instant application fails to provide an adequate written description of “repeating valuation of the reliability,” or of the option to do so after the reliability fails to exceed a threshold. First, it is unclear what is meant by “repeating valuation,” as this term is neither discussed in the specification nor defined in the claim. Second, Applicant failed to show adequate support in the instant specification for these amendments, contrary to the requirements of MPEP 2163(II)(A) and 2163.04. Nor is support for these limitations apparent. ¶ [0083] of the instant specification states that “in a case where it has been determined that the reliability does not exceed the threshold in step S27 (step S26: NO), the information processing apparatus 2 terminates the information processing method S2.” Thus, upon the reliability failing to exceed the threshold, the method immediately terminates according to ¶ [0083]; there is no mention of repeating valuation of the reliability. Accordingly, the amended limitations relating to “optionally repeating valuation of the reliability” are not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventors, at the time the application was filed, had possession of the claimed invention.
Independent claims 9 and 10 are similarly rejected due to parallel claim language.
Claims 2-8 and 11 are rejected due to their dependency upon claims 1, 9, and 10.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, the claims fall within statutory categories: claim 9 is a method claim, and claims 1-8, 10, and 11 are apparatus/machine or manufacture claims. Under Step 2A, however, all of these claims recite abstract ideas, and specifically mental processes. These mental processes are more particularly recited in claims 1, 9, and 10 as:
generating, by the at least one processor using a machine learning model trained to generate a text based on an input text, a generated text corresponding to the target text…
generating, by the at least one processor, a query from the generated text…
retrieving, by the at least one processor, by using the query, one or more candidate documents from a database;
calculating, by the at least one processor, a first measure representing a relationship between the generated text and at least one of the candidate documents, and calculating a second measure representing a relationship between the target text and the generated text…
obtaining, by the at least one processor, by inputting at least the first measure and the second measure into a learned mapping function, a reliability value indicative of a likelihood that the generated text is supported by the at least one candidate document…
determining, by the at least one processor, whether the reliability value exceeds a threshold…
selecting, by the at least one processor, from the one or more candidate documents, a subset of documents that supports the generated text…
outputting a result comprising the generated text and information identifying the selected subset of documents…
refraining, by the at least one processor, from outputting the result…
optionally repeating evaluation of the reliability…
Under Step 2A Prong One, claims 1, 9, and 10 are directed to an abstract idea and specifically a mental process. As detailed above, the steps of generating, retrieving, calculating, obtaining, determining, selecting, refraining, repeating, etc. may be practically performed in the human mind with the use of a physical aid such as a pen and paper. For example, a human could receive a document as input text, split the document into text segments, create query texts from each text segment, retrieve documents similar to each query text from a filing cabinet, calculate a first measure of similarity between each of the text segments and their associated retrieved documents, calculate a second measure of similarity between the input text and each of the text segments, add or multiply the two measures together in order to obtain a reliability value, compare that reliability value to a preset threshold, and only select documents that surpass that required threshold of similarity. For documents that fail to surpass the threshold, the human could optionally repeat valuation of reliability for each of the documents using the above process.
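For purposes of illustration only, and forming no part of the rejection, the flow described above can be sketched as follows; the function names, the use of a character-level similarity ratio as a stand-in for both measures, and the averaging used as a stand-in for the mapping function are all hypothetical choices made solely to show that each step is a simple comparison or arithmetic operation of the kind a human could perform with pen and paper:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for any pen-and-paper similarity measure.
    return SequenceMatcher(None, a, b).ratio()

def evaluate(target_text, generated_text, candidate_documents, threshold=0.5):
    """Sketch of the recited flow: two relational measures, a combined
    reliability value, thresholding, and selection of a supporting subset."""
    supported = []
    for doc in candidate_documents:
        # First measure: generated text vs. candidate document.
        first_measure = similarity(generated_text, doc)
        # Second measure: target text vs. generated text.
        second_measure = similarity(target_text, generated_text)
        # Stand-in "mapping function": simple averaging of the two measures.
        reliability = (first_measure + second_measure) / 2
        if reliability > threshold:
            supported.append(doc)
    if supported:
        # Output the generated text with information identifying the subset.
        return {"text": generated_text, "sources": supported}
    return None  # Refrain from outputting the result.
```

Each operation above is a comparison or arithmetic step performable mentally or with a physical aid; the computer merely automates it.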
Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because claims 1-11 do not recite additional elements that integrate the exception into a practical application. In particular, claims 1, 9, and 10 recite the additional elements of a processor (¶ [0146]), a computer-readable non-transitory storage medium (¶ [0148]), a computer (¶ [0145]-[0149], Figure 12), and a machine learning model (¶ [0021]). These additional elements are recited at a high level of generality and merely equate to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). Further, claims 1, 9, and 10 recite the additional element of “acquiring…”, which amounts to insignificant extra-solution activity that is not indicative of integration into a practical application per MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Under Step 2B, the claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than use of a generic computer (a processor (¶ [0146]), a computer-readable non-transitory storage medium (¶ [0148]), and a computer (¶ [0145]-[0149], Figure 12)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations noted above are directed towards insignificant extra-solution activities. The claims are not patent eligible.
With respect to claim 2, the claim relates to further calculating the reliability of generated text. This relates to a human calculating the reliability value by hand. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 3, the claim relates to further comparing the calculated reliability with a threshold and outputting identifying document information as an optimized result. This relates to a human comparing the hand-calculated reliability value to a preset threshold, then selecting only documents that surpass that reliability value. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 4, the claim relates to outputting the reliability in addition to the modified result. This relates to a human including the reliability score alongside the selected documents when giving them to another human. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 5, the claim relates to calculating reliability based on the generated text, the extracted document, and the acquired text. This relates to a human using the input text, the text segments, and the retrieved documents for each text segment to obtain a reliability score. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 6, the claim relates to further calculating matching degree between an extracted document and a generated text. This relates to a human calculating a similarity between the retrieved documents and each associated text segment. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 7, the claim relates to further parsing the generated text to obtain parts, and then calculating a matching degree between each part and the extracted document. This relates to a human splitting the input text into individual text segments, then calculating a similarity between each text segment and their respective set of retrieved documents. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 8, the claim relates to outputting parts and extracted documents in association with one another. This relates to a human formatting a table such that each text segment corresponds row-wise to their associated extracted documents. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 11, the claim relates to automatically validating generated text before it is presented. This relates to a human ensuring that a text segment has associated retrieved documents before presenting it in a table, and removing the text segment from the output if it does not. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
For all of the above reasons, taken alone or in combination, claims 1-11 are directed to a non-statutory mental process and are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication No. 2020/0193153 A1 (Lee et al.) in view of US Patent Publication No. 2012/0226687 A1 (Xu et al.).
Claim 1
Regarding claim 1, Lee et al. disclose an information processing apparatus comprising at least one processor (Lee et al. ¶ [0269], "The example computer system 1200 includes a processing device 1202"), the at least one processor carrying out:
an acquisition process comprising acquiring a target text (Lee et al. ¶ [0089], "In step 302, an input text block may be received from a user.");
a generation process comprising generating, using a machine learning model trained to generate a text based on an input text, a generated text corresponding to the target text (Lee et al. ¶ [0090], "In step 303, the input text block may be split into input text segments." ¶ [0101], "Machine learning algorithms for entity recognition may be used to split the input text block 400 based on semantic concepts rather than punctuation.");
a query generation process comprising generating a query from the generated text (Lee et al. ¶ [0092], "In an embodiment, the input text segments may be considered query text segments");
a retrieval process comprising retrieving, by using the query, one or more candidate documents from a database (Lee et al. ¶ [0091], "In step 304, text similarity matching may be performed on each input text segment using the machine learning model. The machine learning model may identify for each input text segment one or more similar stored text segments from the document storage 124");
a reliability obtaining process comprising obtaining, by inputting [at least the first measure and the second measure] into a learned mapping function (Lee et al. ¶ [0092], "the input text segments may be considered query text segments and the stored text segments in document storage 124 may be considered candidate text segments. The trained machine learning model may be run to compare a query text segment with all candidate text segments." ¶ [0194], "The text similarity machine learning model 505 learns a function for performing a regression or classification to generate a similarity score." The text similarity machine learning model is considered analogous to a learned mapping function), a reliability value indicative of a likelihood that the generated text is supported by the at least one candidate document (Lee et al. ¶ [0092], "The trained machine learning model may be run once for every candidate text segment to compare the candidate to the query text segment. ... The candidate text segment with the highest similarity score to the query text segment may be identified as the most similar." A text segment being present in both a query text and a document text is considered analogous to a candidate document supporting a text. Thus, a similarity score based on text segment similarity is considered analogous to a reliability value);
a determination process comprising determining whether the reliability value exceeds a threshold (Lee et al. ¶ [0095], "In an embodiment, for each input text segment, at least one of the reference documents includes one text segment that is similar to it. In some embodiments, the required level of similarity is defined by a similarity threshold.");
an output process comprising, in response to a determination that the reliability value exceeds the threshold, selecting, from the one or more candidate documents, a subset of documents that supports the generated text (Lee et al. ¶ [0277], "In step 1308, document similarity matching is performed on the input text block and a set of reference documents. Document similarity matching selects a subset of the reference documents which are most likely to be related to the input text block or are most likely to contain text segments which are similar to input text segments in the input text block."), and outputting a result comprising the generated text and information identifying the selected subset of documents (Lee et al. ¶ [0256], "FIG. 11A illustrates an exemplary output user interface 1100 that may be generated by the search engine to display the input text segments and the generated combination of stored text segments and reference documents." Input text segments are considered analogous to generated text. Stored text segments and/or reference documents are considered analogous to information identifying a selected subset of documents); and
in response to a determination that the reliability value does not exceed the threshold, refraining from outputting the result and optionally repeating valuation of the reliability (Lee et al. ¶ [0226], "the set of candidates may be reduced, such as by requiring a threshold level of similarity or only choosing candidates above a threshold rank when the candidates are ranked by similarity score." Removing candidate documents is considered analogous to refraining from outputting a result. It is noted that the claim scope does not necessitate repeating valuation of reliability).
Lee et al. do not explicitly disclose calculating a first measure and a second measure and inputting both into the mapping function to obtain the reliability value.
However, Xu et al. disclose an acquisition process comprising acquiring a target text (Xu et al. ¶ [0018], "the user 102 may submit a search query 124 to the similar query finder 110 of the online module 106. ");
a generation process comprising generating, [using a machine learning model trained to generate a text based on an input text,] a generated text corresponding to the target text (Xu et al. ¶ [0018]-[0019], "the online module 106 may receive the search query 124 as input and the similar query finder 110 may determine whether there are any additional queries (i.e., similar queries 126) that are different but similar to the search query 124. ... the similar queries 126 may be generated using any technique known in the art.");
a retrieval process comprising retrieving, [by using the query,] one or more candidate documents from a database (Xu et al. ¶ [0022], "the search query 124 and the similar queries 126 may be transmitted to the retrieval interface 112, which may then access the index 122... the index 122 may identify the search results 128 (i.e., web documents, URLs, etc.) that are relevant and/or responsive to the search query 124 and each of the similar queries 126.");
a calculation process comprising calculating a first measure representing a relationship between the generated text and at least one of the candidate documents (Xu et al. ¶ [0021], "r(q', d') may represent the documents d identified in response to similar queries q'"), and calculating a second measure representing a relationship between the target text and the generated text (Xu et al. ¶ [0037], "s_Q(q, q') may represent the similarities between query q and similar queries q'");
a reliability obtaining process comprising obtaining, by inputting at least the first measure and the second measure into a [learned] mapping function (Xu et al. ¶ [0036], "The re-ranking model may be defined as Equation 2: f(q, d) = α₀ r(q, d) + ∑_{(q', d') ∈ P} α_{q', d'} s_Q(q, q') s_D(d, d') r(q', d')"), a reliability value indicative of a likelihood that the generated text is supported by the at least one candidate document (Xu et al. ¶ [0015], "since the search results are retrieved by different queries, the combined search results may then be re-ranked with a re-ranking model. Re-ranking the combined search results may identify which web documents are more and/or less relevant in view of the original query." Ranking search results is considered analogous to indicating a likelihood that generated text is supported by candidate documents. Therefore, re-ranking combined search results is considered analogous to obtaining a reliability value); and
an output process comprising, [in response to a determination that the reliability value exceeds the threshold,] selecting, from the one or more candidate documents, a subset of documents that supports the generated text (Xu et al. ¶ [0056], "given query q ... document set D may be created by merging the search results 128 retrieved by query q and all q' ∈ Q'. The basic ranking scores r(q, d) between all of the retrieved query-document pairs may also be returned. Then, the document similarities may be calculated, the combination parameters may be determined, and the final ranking scores f(q, d) may be calculated. The retrieved documents may then be ranked with the re-ranking model f(q, d) and returned to the user 102." The final ranking score function f(q, d) re-ranks documents based on similarity to a set of similar queries (see above mapping). The set of similar queries is generated by the system (see above mapping). Therefore, re-ranking documents using the final ranking score function f(q, d) is considered analogous to selecting a subset of documents that support a generated text), and outputting a result comprising [the generated text and] information identifying the selected subset of documents (Xu et al. ¶ [0056], "The retrieved documents may then be ranked with the re-ranking model f(q, d) and returned to the user 102.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Lee et al.’s information processing apparatus to incorporate Xu et al.’s measure-based reliability calculation because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Lee et al.’s reliability value and Xu et al.’s reliability value perform the same general and predictable function, that of representing similarities between each generated text and its associated retrieved documents. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of Lee et al.’s reliability calculation with Xu et al.’s reliability calculation. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
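For purposes of illustration only, and forming no part of the rejection, Xu et al.’s Equation 2 as quoted above may be transcribed directly; the function and parameter names below are hypothetical, and the constant inputs in any worked example are chosen solely to show the arithmetic:

```python
def rerank_score(q, d, r, s_Q, s_D, alpha0, alpha, P):
    """Transcription of Xu et al.'s Equation 2:
    f(q, d) = α₀ r(q, d)
              + Σ over (q', d') in P of
                α_{q', d'} * s_Q(q, q') * s_D(d, d') * r(q', d')."""
    # Base ranking score for the original query-document pair.
    score = alpha0 * r(q, d)
    # Add each similar query-document pair's contribution, weighted by
    # its combination parameter and the two similarity terms.
    for (q_prime, d_prime) in P:
        score += (alpha[(q_prime, d_prime)]
                  * s_Q(q, q_prime)
                  * s_D(d, d_prime)
                  * r(q_prime, d_prime))
    return score
```

With α₀ = 1, r ≡ 1, both similarities equal to 0.5, and a single pair weighted α = 2, the score is 1 + 2 × 0.5 × 0.5 × 1 = 1.5, matching a term-by-term evaluation of Equation 2.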
Claim 2
Regarding claim 2, the rejection of claim 1 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein the at least one processor further carries out a calculation process comprising calculating reliability of the text generated in the generation process (Lee et al. ¶ [0092], "The trained machine learning model may be run once for every candidate text segment to compare the candidate to the query text segment. ... The candidate text segment with the highest similarity score to the query text segment may be identified as the most similar." A text segment being present in both a query text and a document text is considered analogous to a candidate document supporting a text. Therefore, a similarity score based on text segment similarity is considered analogous to a reliability value).
Claim 3
Regarding claim 3, the rejection of claim 2 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein:
the at least one processor further carries out a determination process comprising determining whether the reliability exceeds a threshold (Lee et al. ¶ [0095], "In an embodiment, for each input text segment, at least one of the reference documents includes one text segment that is similar to it. In some embodiments, the required level of similarity is defined by a similarity threshold."); and
in a case where it has been determined, in the determination process, that the reliability exceeds the threshold, the at least one processor outputs, in the output process, the result obtained by adding the information identifying the document as an optimized result (Lee et al. ¶ [0224], "In step 1004, combination generation algorithm may be performed. The combination generation algorithm may be an optimization algorithm that optimizes across a plurality of variables. The combination generation algorithm may determine a set of matching stored texts segments from the document storage and associated reference documents that satisfy the selection criteria. In an embodiment, the combination generation algorithm may generate a plurality of combinations that satisfy the required criteria to create a set of candidate combinations." ¶ [0256], "FIG. 11A illustrates an exemplary output user interface 1100 that may be generated by the search engine to display the input text segments and the generated combination of stored text segments and reference documents.").
Claim 4
Regarding claim 4, the rejection of claim 2 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein in the output process, the at least one processor outputs the reliability in addition to the result obtained by adding the information identifying the document (Lee et al. ¶ [0256]-[0258], "FIG. 11A illustrates an exemplary output user interface 1100 that may be generated by the search engine to display the input text segments and the generated combination of stored text segments and reference documents. ... A similarity score 1106 generated by the search engine may be displayed to score the match between the claim text and reference text.").
Claim 5
Regarding claim 5, the rejection of claim 4 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Xu et al. further disclose wherein, in the calculation process, the at least one processor calculates the reliability with use of the text generated in the generation process, the document extracted in the extraction process, and the target text acquired in the acquisition process (Xu et al. ¶ [0036]-[0037], "The re-ranking model may be defined as Equation 2: f(q, d) = α₀·r(q, d) + Σ_{(q′, d′) ∈ P} α_{(q′, d′)}·s_Q(q, q′)·s_D(d, d′)·r(q′, d′) ... The query space 202 may include user query q and one or more similar queries q′ (i.e., the similar queries 126). ... Moreover, the document space 204 may include retrieved document d and similar document(s) d′." Similar queries q′ are considered analogous to generated texts. Document d and/or similar document(s) d′ are considered analogous to extracted documents. User query q is considered analogous to target text).
Claim 6
Regarding claim 6, the rejection of claim 1 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein:
the at least one processor further carries out a matching degree calculation process comprising calculating a matching degree between the document extracted in the extraction process and the text generated in the generation process (Lee et al. ¶ [0092], "The trained machine learning model may be run once for every candidate text segment to compare the candidate to the query text segment. ... The candidate text segment with the highest similarity score to the query text segment may be identified as the most similar." A similarity score is considered analogous to a matching degree); and
in the output process, the at least one processor outputs a result obtained by adding, to the text generated in the generation process, information identifying a document with a matching degree satisfying a predetermined condition, among documents extracted in the extraction process (Lee et al. ¶ [0095], "In an embodiment, for each input text segment, at least one of the reference documents includes one text segment that is similar to it. In some embodiments, the required level of similarity is defined by a similarity threshold." A similarity threshold is considered analogous to a predetermined condition).
Claim 7
Regarding claim 7, the rejection of claim 6 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein:
in the query generation process, the at least one processor cuts out one or more parts of the text generated in the generation process and generates a query for each of the cut parts (Lee et al. ¶ [0090]-[0092], "In step 303, the input text block may be split into input text segments. ... the input text segments may be considered query text segments"); and
in the matching degree calculation process, the at least one processor calculates the matching degree between a part of the parts and a document extracted with use of the query generated for the part (Lee et al. ¶ [0091], "In step 304, text similarity matching may be performed on each input text segment using the machine learning model. The machine learning model may identify for each input text segment one or more similar stored text segments from the document storage 124").
Claim 8
Regarding claim 8, the rejection of claim 7 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein, in the output process, the at least one processor outputs the part and the document extracted with use of the query generated for the part, in association with each other (Lee et al. ¶ [0256]-[0257], "FIG. 11A illustrates an exemplary output user interface 1100 that may be generated by the search engine to display the input text segments and the generated combination of stored text segments and reference documents. Claim chart 1120 displays the text of one or more patent claims that were input to the system, such as in step 302, and the corresponding stored text segments in one of the combinations identified by the search engine. ... The current combination of reference documents is displayed in reference display fields 1107, 1108, 1109, which identify three references used in the combination currently displayed in the claim chart 1120." See Fig. 11A, which illustrates each part of a query corresponding to a respective associated reference document).
Claim 9
Regarding claim 9, the limitations of claim 9 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.
Claim 10
Regarding claim 10, Lee et al. disclose a computer-readable non-transitory storage medium storing a program for causing a computer to function as an information processing apparatus (Lee et al. ¶ [0298], "The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.").
The remaining limitations of claim 10 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.
Claim 11
Regarding claim 11, the rejection of claim 1 is incorporated. Lee et al. in view of Xu et al. disclose all the elements of the claimed invention as stated above.
Lee et al. further disclose wherein the determination process and the subsequent output process, which conditionally outputs the result or refrains from outputting, collectively constitute a decision making process for automatically validating the generated text before it is presented (Lee et al. ¶ [0091], "In step 304, text similarity matching may be performed on each input text segment using the machine learning model. The machine learning model may identify for each input text segment one or more similar stored text segments from the document storage 124 or may determine that there is no similar stored text segment." Determining that there are no similar stored text segments to an input text segment is considered analogous to invalidating a generated text. Therefore, determining whether there are similar documents to an input text segment is considered analogous to a decision making process for automatically validating a generated text.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB B VOGT/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
02/05/2026