DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
2. The amendment filed 08/26/2025 has been considered by the Examiner. New claims 9-12 have been added. Claims 1, 3-5, and 9-12 are pending and have been examined.
Response to Arguments
Applicant’s amendments and arguments filed 08/26/2025, with respect to claims 1, 3-5 and new claims 9-12, have been fully considered.
Applicant’s arguments filed 08/26/2025, at pages 9-12, with respect to claims 1, 3-5 under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argued that Torisawa does not teach the limitation “selecting a normalized text string from among the plurality of normalized text strings stored in the suggestion database by using comparing the search text string with the plurality of normalized text strings”. Applicant argued that Torisawa merely discloses normalizing a plurality of sentences into one sentence and scoring the question sentences based on the classes of words appearing in the source sentences, or combinations of appearing words, but does not teach comparing the search text string with the plurality of normalized text strings to select a normalized text string. The Examiner respectfully disagrees. Torisawa, in column 5, lines 51-65, column 6, lines 11-17, and Fig. 2, illustrates a question-answering system comprising a preprocessing unit 202 configured to perform pre-processing for preparation of an answer and a question sentence generating DB, and a factoid type question sentence generating subsystem 242 that uses outputs from pre-processing unit 202. Column 6, lines 41-52, and Fig. 3 show that pre-processing unit 202 includes a morphological analyzing unit 280 that performs morphological analysis of each sentence in question-answering system corpus 200, adds grammatical information such as parts of speech, inflected forms and readings, and outputs a sequence of morphemes, and a dependency analyzing unit 282 that analyzes the dependency relation of sentences using the sequence of morphemes output from morphological analyzing unit 280; this processing is nothing but normalization. Column 11, lines 20-30, and Fig. 6 show that factoid type question sentence generating sub-system 242 further includes a distinct question sentence selecting unit 512 configured to select distinct questions from cumulative question sentences using thesaurus 508, which means distinct question sentence selecting unit 512 already has normalized text strings. Applicant further argued that the Examiner has interpreted the phrase "[t]he question sentences are classified in accordance with the scores" of Torisawa as comparing. Torisawa, in column 11, lines 20-30, teaches that by using scoring rule storage unit 514, question sentences of high scores can be selected, which is nothing but a comparison. Applicant also argued that Torisawa scores question sentences based on classes, specified by thesaurus 508, of words appearing in the source sentences of question sentences, or that scores of question sentences may be made higher or lower in accordance with combinations of appearing words, but does not teach comparing the search text string with the plurality of normalized text strings to select a normalized text string. The Examiner maintains that selecting question sentences based on scoring against different criteria amounts to comparing the criteria between two questions and selecting one.
Applicant further argued that the claimed invention uses the same synonym dictionary in the process of generating the search text string from the input text string and of generating the normalized text string from the question text string (i.e., uses one representative term selected from among the term included in the input text string and the plurality of synonyms), and that the cited references do not teach or even suggest performing both the normalization of generating the normalized text strings from the plurality of question text strings based on the synonym dictionary and the normalization of selecting the normalized text string by converting the input text string based on the synonym dictionary. The Examiner agrees that two references have been used to teach the above limitations, but it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Torisawa and Katayama both teach a synonym dictionary. Both use the synonym dictionary to store synonymous relations or entailment relations of a word, an answer sentence pattern, or both. In both references, an element word can be replaced with a word whose substitution does not change the meaning, by using the synonym dictionary. Torisawa teaches synonym dictionary 510 in column 11, lines 20-55, and Fig. 6, which show how words and patterns having synonymous relations are uniformly replaced by a representative term/pattern; the synonym/entailment dictionary stores in advance what representative words and what patterns should be used. Katayama teaches, at page 3, paragraphs 1 and 6, synonym dictionary 10, and that by using the dictionary a search text string can be generated by selecting from among the plurality of synonyms corresponding to the term included in the input text string. Thus, both references use a synonym dictionary in a similar manner.
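For context, the shared-dictionary normalization discussed above can be sketched as follows. This is an illustrative sketch only; the dictionary entries, function name, and example strings are hypothetical and are not taken from either reference.

```python
# Hypothetical synonym dictionary: each synonym maps to one representative term.
SYNONYMS = {
    "memory dial": "phone book",
    "check": "search",
    "look up": "search",
}

def normalize(text: str) -> str:
    """Replace each known synonym with its representative term.

    Naive substring replacement, for illustration only; a real system
    would tokenize (e.g., via morphological analysis) before replacing.
    """
    for synonym, representative in SYNONYMS.items():
        text = text.replace(synonym, representative)
    return text

# The same dictionary normalizes both a stored question text string and a
# user's input text string, so the two become directly comparable.
question = normalize("how do I check the memory dial")
search = normalize("I want to look up the memory dial")
```

Because one dictionary drives both conversions, a stored normalized question and a normalized input that use different synonyms still collapse to the same representative terms.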
Therefore, the previous rejection of claims 1, 3-5 is maintained, and claims 11 and 12 are also rejected over the same references. Please see the rejections below.
Compact Prosecution
In order to advance prosecution of this case, the Examiner reached out to the attorney with a proposed examiner’s amendment to incorporate newly added claims 9 and 10 into the independent claims accordingly, but no agreement was reached to place the case in condition for allowance. Please note that this is an attempt to provide suggestions to further advance prosecution and has not been searched; an updated search would be required if the proposal were agreed to.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Henmi et al. (US 10339222 B2), hereinafter referenced as Henmi, in view of Torisawa et al. (US 10380149 B2), hereinafter referenced as Torisawa, further in view of Katayama et al. (JP 2006004045 A), hereinafter referenced as Katayama.
Regarding Claim 1, Henmi teaches an information providing system comprising:
a knowledge database configured to store a plurality of question text strings respectively associated with a plurality of response contents (Henmi: Column 18, lines 34-41, Fig. 8, knowledge database 310 includes a plurality of units; each unit stores reference texts and response texts, which are associated with each other and constitute a set);
a network interface ( Henmi: Column 16, lines 17-19, Fig.6, network 700 and the network I/F module 440, using a protocol such as HTTP);
and a first central processing unit (CPU) configured to: receive, via the network interface, an input text string that is currently input by a user by using a user terminal ( Henmi: Column 16, lines 17-18, Fig.6, Input reception module ( receiver) 410 receives input text entered by user 10 by using user terminal 100. Column 38, lines 62-67, Fig. 38, CPU 1001 controls the processing);
and send, via the network interface, the specified question text string to the user terminal ( Henmi: Column 16, lines 17-19, Fig.6, network 700 and the network I/F module 440, send the question to user terminal 100, using a protocol such as HTTP),
wherein the user terminal comprising: a display ( Henmi: Column 10, lines 9-13, Fig. 1, user terminal 100 comprising a display);
and a second CPU configured to: display an input area on the display (Henmi: Column 10, lines 60-65, Fig. 1, user terminal 100 could be the user’s personal computer, which will have a processor (second CPU) configured to control the display (by inherent design));
receive an update of the input text string on the input area ( Henmi: Column 12, lines 7-14, Fig. 2A, user receives an update on the input area ( 112));
in response to receiving the update, send a request for performing a question-answer process to the first CPU ( Henmi: Column 12, lines 26-28, Fig. 2B, user enters input text ( question), “what is recommendation”, in response to receiving the update and by pressing “send” button 114, send the request to perform QA process to the information provider ( first CPU 1001) );
determine whether a response is supplied from the first CPU (Henmi: Column 12, lines 32-34, Fig. 2B, it can be determined from the response display area 112 that a response has been received from the information provider (first CPU 1001));
generate, as a suggested keyword, the specified question text string based on the response when the response is supplied ( Henmi: Column 12, lines 35-44, Fig.2C, based on the question asked about delivery, generating suggested keyword “shipping type” );
and display the suggested keyword on a suggested keyword area of the display ( Henmi: Column 12, lines 35-44, Fig.2C, suggested keyword “shipping type” is displayed on display area 116 ).
Henmi while teaching the system of claim 1, fails to explicitly teach the claimed, a synonym database configured to store a synonym dictionary in which synonyms that are similar to one another in definition are set, only one of the synonyms being set as a representative term in the synonym dictionary; a suggestion database configured to store the plurality of question text strings respectively associated with a plurality of normalized text strings; each of the plurality of normalized text strings being generated by converting a term included in an associated question text string among the plurality of question text strings into a representative term based on the synonym dictionary; generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string, and select a normalized text string from among the plurality of normalized text strings stored in the suggestion database by using comparing the search text string with the plurality of normalized text strings, so as to specify a question text string associated with the selected normalized text string among the plurality of question text strings as a question text string related to the search text string; the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database; and send, [via the network interface], the specified question text string to the user terminal.
However, Torisawa does teach the claimed, a synonym database configured to store a synonym dictionary in which synonyms that are similar to one another in definition are set, only one of the synonyms being set as a representative term in the synonym dictionary (Torisawa: Column 11, lines 20-37 and 50-52, Fig. 6, synonymous words are replaced by a representative word and stored in synonym dictionary 510);
a suggestion database configured to store the plurality of question text strings respectively associated with a plurality of normalized text strings ( Torisawa: Column 18, lines 1-17, Fig. 6, distinct question sentence selecting unit 512 normalizes those of resulting sentences that come to be the same into one sentence with reference to thesaurus 508 and synonym/entailment dictionary 510 and output (store) them in question sentence list 482 ( suggestion database)),
each of the plurality of normalized text strings being generated by converting a term included in an associated question text string among the plurality of question text strings into a representative term based on the synonym dictionary ( Torisawa: Column 17, lines 60-67, column 18, lines 1-11, Fig.5, Based on the synonym dictionary 510, question sentence selecting unit 512 normalizes the question text strings);
and select a normalized text string from among the plurality of normalized text strings stored in the suggestion database by using comparing the search text string with the plurality of normalized text strings, so as to specify a question text string associated with the selected normalized text string among the plurality of question text strings as a question text string related to the search text string (Torisawa: Column 5, lines 51-65, column 6, lines 11-17, Fig. 2 illustrates a question-answering system comprising a preprocessing unit 202 configured to perform pre-processing for preparation of an answer and a question sentence generating DB, and a factoid type question sentence generating subsystem 242 that uses outputs from pre-processing unit 202. Column 6, lines 41-52, Fig. 3 show that pre-processing unit 202 includes a morphological analyzing unit 280 that performs morphological analysis of each sentence in question-answering system corpus 200, adds grammatical information such as parts of speech, inflected forms and readings, and outputs a sequence of morphemes, and a dependency analyzing unit 282 that analyzes the dependency relation of sentences using the sequence of morphemes output from morphological analyzing unit 280 (normalization). Column 11, lines 20-30, Fig. 6 show that factoid type question sentence generating sub-system 242 further includes a distinct question sentence selecting unit 512 configured to select distinct questions from cumulative question sentences using thesaurus 508, which means distinct question sentence selecting unit 512 already has normalized text strings. Column 18, lines 1-17, Fig. 6, distinct question sentence selecting unit 512 normalizes those of the resulting sentences that come to be the same into one sentence with reference to thesaurus 508 and synonym/entailment dictionary 510 and outputs (stores) them in question sentence list 482 (suggestion database). The question sentences are classified in accordance with the scores (comparing), and a prescribed number of question sentence candidates having higher scores are selected).
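The comparing-and-selecting step mapped to Torisawa above might be pictured, purely as an illustrative sketch, as scoring each stored normalized string against the search string and keeping the best match. The function names and the shared-term scoring rule are assumptions for illustration, not taken from the reference.

```python
def select_normalized(search: str, normalized_strings: list[str]) -> str:
    """Pick the stored normalized text string scoring highest against the
    search text string (here scored by count of shared terms)."""
    search_terms = set(search.split())

    def score(candidate: str) -> int:
        # Higher score = more terms in common with the search string.
        return len(search_terms & set(candidate.split()))

    return max(normalized_strings, key=score)

# Hypothetical suggestion database of already-normalized text strings.
db = ["phone book search", "alarm volume adjustment", "ring tone erasure"]
best = select_normalized("search phone book", db)
```

Selecting the highest-scoring candidate in this way is one concrete realization of "comparing" the search string with each stored normalized string.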
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Torisawa’s teaching of a question sentence generating device into the information providing system and method taught by Henmi, because this would allow the user to prepare a question sentence that leads to an answer guaranteed to have a certain accuracy or higher on an issue of his/her interest through a question-answering system (Torisawa: Columns 2-4).
Henmi in view of Torisawa, while teaching the system of claim 1, fail to explicitly teach the claimed, generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string; the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database; and send, [via the network interface], the specified question text string to the user terminal.
However, Katayama does teach the claimed, generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string (Katayama: Page 3, para. 1, synonym dictionary 10. Page 3, para. 6, based on the input text string “I want to check frequently used memory dials” and synonym dictionary 10, the term “memory dial” is converted to “phone book” (representative term) and the term “check” is converted to “search” to generate the search text string. Page 4, para. 7, there can be a plurality of synonyms corresponding to one term; for example, for “alarm”, there can be “setting”, “sound”, “timer”, “sleeping”, “early”, “time”, “clock”);
the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database (Katayama: Page 3, para. 6, based on the input text string “I want to check frequently used memory dials” and synonym dictionary 10, the input text string is normalized to “phone book search”, which is not the original input text string);
and send, [via the network interface], the specified question text string to the user terminal (Katayama: Page 4, para. 1, “phone book: search” is displayed on the phone display).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Katayama’s teaching of an information providing method into the information providing system and method taught by Henmi in view of Torisawa, because this would ensure a certain level of clarity, readability, and ease of searching (Katayama: Page 1).
Regarding Claim 3, Henmi in view of Torisawa, further in view of Katayama teach the information providing system according to claim 1. Henmi further teaches wherein the user terminal confirms the input text string and sends the confirmed input text string to the receiver each time an input is made by the user (Henmi: Column 12, lines 26-31, Fig. 2A, the user presses the send button 114 after each input text string to confirm and send the input text string to the receiver),
and the user terminal presents the question text string in a selectable manner, each time the question text string is supplied from the transmitter ( Henmi: Column 4, lines 11-16, Column 28, lines 11-29, Fig.18 shows selectable question text strings).
Regarding Claim 4, Henmi in view of Torisawa, further in view of Katayama teach the information providing system according to claim 1. Henmi further teaches wherein, in association with the representative term, the knowledge database stores a text string for input conversion, which corresponds to each of the synonyms (Henmi: Column 18, lines 30-33 and 51-58, Fig. 8, Q1-1 is the representative term and Q1-2 and Q1-3 are synonyms).
Claim 5 is a method claim performing the steps in system claim 1 above and as such, claim 5 is similar in scope and content to claim 1 and therefore, claim 5 is rejected under similar rationale as presented against claim 1 above.
Regarding Claim 11, Henmi teaches an information providing system comprising:
a knowledge database configured to store a plurality of question text strings respectively associated with a plurality of response contents (Henmi: Column 18, lines 34-41, Fig. 8, knowledge database 310 includes a plurality of units; each unit stores reference texts and response texts, which are associated with each other and constitute a set);
a network interface ( Henmi: Column 16, lines 17-19, Fig.6, network 700 and the network I/F module 440, using a protocol such as HTTP);
and a first central processing unit (CPU) configured to: receive, via the network interface, an input text string that is currently input by a user by using a user terminal ( Henmi: Column 16, lines 17-18, Fig.6, Input reception module ( receiver) 410 receives input text entered by user 10 by using user terminal 100. Column 38, lines 62-67, Fig. 38, CPU 1001 controls the processing);
and send, via the network interface, the specified question text string to the user terminal ( Henmi: Column 16, lines 17-19, Fig.6, network 700 and the network I/F module 440, send the question to user terminal 100, using a protocol such as HTTP),
wherein the user terminal comprising: a display ( Henmi: Column 10, lines 9-13, Fig. 1, user terminal 100 comprising a display);
and a second CPU configured to: display an input area on the display (Henmi: Column 10, lines 60-65, Fig. 1, user terminal 100 could be the user’s personal computer, which will have a processor (second CPU) configured to control the display (by inherent design));
receive an update of the input text string on the input area ( Henmi: Column 12, lines 7-14, Fig. 2A, user receives an update on the input area ( 112));
in response to receiving the update, send a request for performing a question-answer process to the first CPU ( Henmi: Column 12, lines 26-28, Fig. 2B, user enters input text ( question), “what is recommendation”, in response to receiving the update and by pressing “send” button 114, send the request to perform QA process to the information provider ( first CPU 1001) );
determine whether a response is supplied from the first CPU (Henmi: Column 12, lines 32-34, Fig. 2B, it can be determined from the response display area 112 that a response has been received from the information provider (first CPU 1001));
generate, as a suggested keyword, the specified question text string based on the response when the response is supplied ( Henmi: Column 12, lines 35-44, Fig.2C, based on the question asked about delivery, generating suggested keyword “shipping type” );
and display the suggested keyword on a suggested keyword area of the display ( Henmi: Column 12, lines 35-44, Fig.2C, suggested keyword “shipping type” is displayed on display area 116 ).
Henmi while teaching the system of claim 11, fails to explicitly teach the claimed, a synonym database configured to store a synonym dictionary in which synonyms that are similar to one another in definition are set, only one of the synonyms being set as a representative term in the synonym dictionary; a suggestion database configured to store the plurality of question text strings respectively associated with a plurality of normalized text strings; each of the plurality of normalized text strings being generated by converting a term included in an associated question text string among the plurality of question text strings into a representative term based on the synonym dictionary; generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string, and select a normalized text string from among the plurality of normalized text strings stored in the suggestion database by comparing the search text string with the plurality of normalized text strings, so as to specify a question text string associated with the selected normalized text string among the plurality of question text strings as a question text string related to the search text string, the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database, wherein the plurality of normalized text strings to be compared with the search text string are already stored in the suggestion database at a time when the input text string is currently input by the user; and send, [via the network interface], the specified question text string to the user terminal.
However, Torisawa does teach the claimed, a synonym database configured to store a synonym dictionary in which synonyms that are similar to one another in definition are set, only one of the synonyms being set as a representative term in the synonym dictionary (Torisawa: Column 11, lines 20-37 and 50-52, Fig. 6, synonymous words are replaced by a representative word and stored in synonym dictionary 510);
a suggestion database configured to store the plurality of question text strings respectively associated with a plurality of normalized text strings ( Torisawa: Column 18, lines 1-17, Fig. 6, distinct question sentence selecting unit 512 normalizes those of resulting sentences that come to be the same into one sentence with reference to thesaurus 508 and synonym/entailment dictionary 510 and output (store) them in question sentence list 482 ( suggestion database)),
each of the plurality of normalized text strings being generated by converting a term included in an associated question text string among the plurality of question text strings into a representative term based on the synonym dictionary ( Torisawa: Column 17, lines 60-67, column 18, lines 1-11, Fig.5, Based on the synonym dictionary 510, question sentence selecting unit 512 normalizes the question text strings);
and select a normalized text string from among the plurality of normalized text strings stored in the suggestion database by comparing the search text string with the plurality of normalized text strings, so as to specify a question text string associated with the selected normalized text string among the plurality of question text strings as a question text string related to the search text string (Torisawa: Column 5, lines 51-65, column 6, lines 11-17, Fig. 2 illustrates a question-answering system comprising a preprocessing unit 202 configured to perform pre-processing for preparation of an answer and a question sentence generating DB, and a factoid type question sentence generating subsystem 242 that uses outputs from pre-processing unit 202. Column 6, lines 41-52, Fig. 3 show that pre-processing unit 202 includes a morphological analyzing unit 280 that performs morphological analysis of each sentence in question-answering system corpus 200, adds grammatical information such as parts of speech, inflected forms and readings, and outputs a sequence of morphemes, and a dependency analyzing unit 282 that analyzes the dependency relation of sentences using the sequence of morphemes output from morphological analyzing unit 280 (normalization). Column 11, lines 20-30, Fig. 6 show that factoid type question sentence generating sub-system 242 further includes a distinct question sentence selecting unit 512 configured to select distinct questions from cumulative question sentences using thesaurus 508, which means distinct question sentence selecting unit 512 already has normalized text strings. Column 18, lines 1-17, Fig. 6, distinct question sentence selecting unit 512 normalizes those of the resulting sentences that come to be the same into one sentence with reference to thesaurus 508 and synonym/entailment dictionary 510 and outputs (stores) them in question sentence list 482 (suggestion database). The question sentences are classified in accordance with the scores (comparing), and a prescribed number of question sentence candidates having higher scores are selected).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Torisawa’s teaching of a question sentence generating device into the information providing system and method taught by Henmi, because this would allow the user to prepare a question sentence that leads to an answer guaranteed to have a certain accuracy or higher on an issue of his/her interest through a question-answering system (Torisawa: Columns 2-4).
Henmi in view of Torisawa, while teaching the system of claim 11, fail to explicitly teach the claimed, generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string; the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database, wherein the plurality of normalized text strings to be compared with the search text string are already stored in the suggestion database at a time when the input text string is currently input by the user; and send, [via the network interface], the specified question text string to the user terminal.
However, Katayama does teach the claimed, generate a search text string by converting a term included in the input text string into a representative term based on the synonym dictionary, the representative term including only one term among the term included in the input text string and a plurality of synonyms corresponding to the term included in the input text string (Katayama: Page 3, para. 1, synonym dictionary 10. Page 3, para. 6, based on the input text string “I want to check frequently used memory dials” and synonym dictionary 10, the term “memory dial” is converted to “phone book” (representative term) and the term “check” is converted to “search” to generate the search text string. Page 4, para. 7, there can be a plurality of synonyms corresponding to one term; for example, for “alarm”, there can be “setting”, “sound”, “timer”, “sleeping”, “early”, “time”, “clock”);
the input text string being not used to select the normalized text string from among the plurality of normalized text strings stored in the suggestion database, wherein the plurality of normalized text strings to be compared with the search text string are already stored in the suggestion database at a time when the input text string is currently input by the user (Katayama: Page 3, para. 6, based on the input text string “I want to check frequently used memory dials” and synonym dictionary 10, the input text string is normalized to “phone book search”, which is not the original input text string. Page 3, paras. 3-4, the dictionary database (suggestion database) includes various dictionaries such as an associative word dictionary (vocabulary information) 9, a different notation dictionary (word form information) 11, and a synonym dictionary (semantic information) 10, and these dictionary databases perform a “fuzzy search”. For example, if “noisy” is entered from the screen of the mobile phone 1, a headword is searched from the associative word dictionary 9 and “equalizer function setting”, “Ring tone erasure”, “Button confirmation sound setting”, “Mail reception confirmation sound erasure”, “Alarm function volume setting”, “Sound erasure”, “Alarm volume adjustment”, “Play volume”, “Ring volume adjustment”, “Button confirmation sound”, “Receiving sound”, “Volume adjustment”, or the like will be displayed, which are normalized text strings already saved in the dictionary).
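The "fuzzy search" flow attributed to Katayama above can be sketched as follows. This is an illustrative sketch only; the dictionary structure and function name are assumptions, and the suggestion entries are abbreviated from the example quoted above.

```python
# Hypothetical associative word dictionary: a headword maps to normalized
# text strings that are already stored before any input is entered.
ASSOCIATIVE = {
    "noisy": [
        "equalizer function setting",
        "ring volume adjustment",
        "button confirmation sound",
    ],
}

def suggest(headword: str) -> list[str]:
    """Return the pre-stored normalized text strings for a headword,
    or an empty list when the headword is unknown."""
    return ASSOCIATIVE.get(headword, [])

suggestions = suggest("noisy")
```

The key point this sketch captures is that the candidate strings exist in the dictionary before the user types anything; the input merely selects among them.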
and send, [via the network interface], the specified question text string to the user terminal (Katayama: Page 4, para. 1, “phone book: search” is displayed on the phone display).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Katayama’s teaching of an information providing method into the information providing system and method taught by Henmi in view of Torisawa, because this would ensure a certain level of clarity, readability, and ease of searching (Katayama: Page 1).
Regarding Claim 12, Henmi in view of Torisawa, further in view of Katayama, teaches the information providing system according to claim 11. Torisawa further teaches wherein, when the knowledge database is constructed, all of the plurality of question text strings stored in the knowledge database are normalized to the plurality of normalized text strings (Torisawa: Column 5, lines 51-64, column 6, lines 11-17, Fig. 2 illustrates a question-answering system comprising a preprocessing unit 202 configured to perform pre-processing for preparation of an answer and a question sentence generating DB, and a question-answering sub-system 240 (knowledge database) responsive to a question sentence, configured to generate and output an answer sentence in natural language by searching the answer-generating DB held therein. Sub-system 240 receives the output from preprocessing unit 202, which is normalized as mentioned in column 6, lines 41-52, Fig. 3, which shows that the pre-processing unit 202 includes a morphological analyzing unit 280 that performs morphological analysis of each sentence in question-answering system corpus 200, adds grammatical information such as parts of speech, inflected forms and readings, and outputs a sequence of morphemes, and a dependency analyzing unit 282 that analyzes the dependency relation of sentences using the sequence of morphemes output from morphological analyzing unit 280 (normalization)).
and the plurality of normalized text strings are automatically stored in the suggestion database (Torisawa: Column 11, line 30, normalized text strings are stored in question sentence list 482 (suggestion database)),
and wherein, when the knowledge database is updated, updated question text strings are normalized to updated normalized text strings, and the updated normalized text strings are automatically stored in the suggestion database (Torisawa: Column 4, lines 4-17, a question sentence database configured to store a plurality of question sentences, each generated from any passage in the corpus and having as an answer a passage as a source of generating the question sentence; and question sentence generating means, responsive to reception of a word or a word sequence as a source of generating a question sentence, for generating and outputting a new question sentence from the word or word sequence as the source of generating the question sentence, or synonyms or entailments of these, and from a question sentence stored in the question sentence database by referring to the question sentence database).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Torisawa’s teaching of a question sentence generating device into the information providing system and method taught by Henmi, because this would allow the user to prepare a question sentence that leads to an answer guaranteed to have a certain accuracy or higher on an issue of his/her interest through a question-answering system (Torisawa: Columns 2-4).
Allowable Subject Matter
Claims 9 and 10 contain subject matter that is allowable over the prior art of record. These claims would be allowable if rewritten in independent form, including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADIRA SULTANA, whose telephone number is (571) 272-4048. The examiner can normally be reached M-F, 7:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NADIRA SULTANA/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
11/15/2025