DETAILED ACTION
Introduction
1. This Office action is in response to Applicant’s submission filed on 10/06/2023. Claims 1-14 are cancelled. Claims 15-34 are pending in the application and have been examined.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
3. The drawings filed on 10/06/2023 have been accepted and considered by the Examiner.
Nonstatutory Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 15-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 10,339,823. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ‘823 patent anticipate the instant claims, as presented in the chart below. Independent claims 15 and 25 of the instant application (‘555) are anticipated by independent claims 1 and 9 of the ‘823 patent. Dependent claims 16-24 and 26-34 likewise map to the corresponding dependent claims 2-8 and 10-17 of the ‘823 patent.
Present App. 18/377,555:
15. (NEW) An electronic apparatus, comprising: a display; a communicator; a voice receiver; and a processor configured to:
receive a user voice input through the voice receiver while a function corresponding to voice recognition is executed, transmit text corresponding to the user voice input to a server through the communicator, receive a search result corresponding to the user voice input from the server, and control the display to output at least one question sentence based on the received search result.
16. (NEW) The electronic apparatus as claimed in claim 15, wherein the at least one question sentence is a question sentence related to a keyword included in the search result.
17. (NEW) The electronic apparatus as claimed in claim 15, further comprising: a manipulation receiver, wherein the processor is configured to transmit a question sentence selected based on a user input received through the manipulation receiver from among the at least one question sentence, to the server.
18. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input is a first user voice input, and the processor is configured to transmit a question sentence selected based on a second user input received through the voice receiver from among the at least one question sentence, to the server, through the communicator.
19. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input includes two or more words.
20. (NEW) The electronic apparatus as claimed in claim 19, wherein the processor is configured to receive the search result based on attribute information related to the two or more words, from the server.
21. (NEW) The electronic apparatus as claimed in claim 15, wherein the processor is configured to, based on there being a plurality of question sentences, control the display to output a list including a plurality of question sentences to be selected by a user.
22. (NEW) The electronic apparatus as claimed in claim 21, wherein the plurality of question sentences are included or not included in the list depending on a number of times previously selected.
23. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input is an input received through the communicator from an external apparatus rather than from the voice receiver.
24. (NEW) The electronic apparatus as claimed in claim 15, wherein the processor is configured to, based on the user voice input corresponding to a sentence speech, control the display to output the at least one question sentence based on an object name obtained from the user voice input.
25. (NEW) A controlling method of an electronic apparatus, the controlling method comprising: receiving a user voice input through a voice receiver while a function corresponding to voice recognition is executed; transmitting text corresponding to the user voice input to a server through a communicator; receiving a search result corresponding to the user voice input from the server; and outputting at least one question sentence based on the received search result.
26. (NEW) The controlling method as claimed in claim 25, wherein the at least one question sentence is a question sentence related to a keyword included in the search result.
27. (NEW) The controlling method as claimed in claim 25, further comprising: transmitting a question sentence selected based on a user input received through a manipulation receiver from among the at least one question sentence, to the server.
28. (NEW) The controlling method as claimed in claim 25, wherein the user voice input is a first user voice input, and the controlling method comprises transmitting a question sentence selected based on a second user input received through the voice receiver from among the at least one question sentence, to the server.
29. (NEW) The controlling method as claimed in claim 25, wherein the user voice input includes two or more words.
30. (NEW) The controlling method as claimed in claim 29, wherein the receiving a search result comprises receiving the search result based on attribute information related to the two or more words, from the server.
31. (NEW) The controlling method as claimed in claim 25, wherein the outputting at least one question sentence comprises, based on there being a plurality of question sentences, outputting a list including a plurality of question sentences to be selected by a user.
32. (NEW) The controlling method as claimed in claim 31, wherein the plurality of question sentences are included or not included in the list depending on a number of times previously selected.
33. (NEW) The controlling method as claimed in claim 25, wherein the user voice input is an input received from an external apparatus.
34. (NEW) The controlling method as claimed in claim 25, wherein the outputting at least one question sentence comprises, based on the user voice input corresponding to a sentence speech, outputting the at least one question sentence based on an object name obtained from the user voice input.
U.S. Patent 10,339,823:
1. A display apparatus comprising: a display; an input unit; a communicator; and a processor configured to:
receive at least two words included in speech through the input unit in a first order, generate a plurality of different question sentences which include the at least two words in a plurality of different second orders regardless of the first order in which the at least two words are received, control the display to display the generated plurality of different question sentences, receive a selection of a question sentence from the displayed plurality of different question sentences, transmit information corresponding to the selected question sentence to a server via the communicator, and, based on at least one answer result corresponding to the information being received from the server via the communicator, control the display to display the received at least one answer result to provide an answer result appropriate to a question intention of a user although a non-sentence speech is input.
2. The display apparatus as claimed in claim 1, further comprising: a storage unit configured to store a plurality of sentences, and keywords corresponding to characteristic vectors for the plurality of respective sentences, wherein the processor compares a similarity in a pronunciation column between the stored keywords corresponding to the characteristic vectors for the plurality of respective sentences and the speech, determines a sentence including a keyword having a high similarity with the speech, and displays the determined sentence as the at least one question sentence.
3. The display apparatus as claimed in claim 2, wherein, based on a plurality of sentences being determined as the at least one question sentence, the processor displays the plurality of sentences determined as the at least one question sentence in order of a highest number of times the respective at least one question sentence has been previously selected based on selection history information for each of the plurality of sentences.
4. The display apparatus as claimed in claim 2, wherein, based on a plurality of sentences being determined as the at least one question sentence, the processor selects and displays a predetermined number of sentences having a highest number of times the respective at least one question sentence has been previously selected based on selection history information for each of the plurality of sentences.
5. The display apparatus as claimed in claim 1, wherein, based on keywords related to the speech being received from the server, the processor combines the received keywords, creates the question sentence with respect to the speech, and displays the question sentence.
6. The display apparatus as claimed in claim 5, wherein the server is a triple structure knowledge base server and extracts the keywords related to the speech using attribute information related to the speech.
7. The display apparatus as claimed in claim 2, wherein, when there is no sentence including the keyword having the highest similarity with the speech, the processor receives the speech and the keywords from the server, combines the received keywords, and creates a question sentence related to the speech.
8. The display apparatus as claimed in claim 1, wherein, based on the speech being a sentence, the processor extracts an object name from the speech using a natural language processing based algorithm and creates a question language based on the extracted object name.
9. A method, performed by a display apparatus, of providing questions and answers, the method comprising: receiving at least two words included in speech in a first order; generate a plurality of different question sentences which include the received at least two words in a plurality of different second orders regardless of the first order in which the at least two words are received; displaying the generated plurality of different question sentences; receive a selection of a question sentence from the displayed plurality of different question sentences; transmitting information corresponding to the selected question sentence to a server; and based on at least one answer result, corresponding to the information, being received from the server, displaying the received at least one answer result to provide an answer result appropriate to a question intention of a user although a non-sentence speech is input.
10. The method as claimed in claim 9, wherein the displaying of the at least one question sentence further includes: comparing a similarity in a pronunciation column between keywords corresponding to characteristic vectors for each of a plurality of previously stored sentences and the speech, determining a sentence including a keyword having a high similarity with the speech, and displaying the determined sentence as the at least one question sentence.
11. The method as claimed in claim 10, wherein the displaying of the at least one question sentence further includes: based on a plurality of sentences being determined as the at least one question sentence, displaying the plurality of sentences determined as the at least one question sentence in order of a highest number of times the respective at least one question sentence has been previously selected based on selection history information for each of the plurality of sentences.
12. The method as claimed in claim 10, wherein the displaying of the at least one question sentence further includes: based on a plurality of sentences being determined as the at least one question sentence, selecting and displaying a predetermined number of sentences having a highest number of times the respective at least one question sentence has been previously selected based on selection history information for each of the plurality of sentences.
13. The method as claimed in claim 9, wherein the displaying of the at least one question sentence further includes: based on keywords related to the speech being received from the server, combining the received keywords, creating the question sentence with respect to the speech, and displaying the created question sentence.
14. The method as claimed in claim 13, wherein the server is a triple structure knowledge base server and extracts keywords associated with the core vocabulary using attribute information related to the core vocabulary.
15. The method as claimed in claim 10, wherein the displaying of the at least one question sentence further includes: when there is no sentence including the keyword having the highest similarity with the speech, receiving the keywords associated with the speech from the server, combining the received keywords, and creating and displaying a question sentence related to the speech.
16. The method as claimed in claim 9, further comprising: determining whether the speech is a word speech or a sentence speech, wherein the transmitting includes, based on the speech being a sentence speech, extracting an object name from the speech using a natural language processing based algorithm, creating a question language based on the extracted object name, and transmitting the created question language to the server.
17. The controlling method as claimed in claim 10, further comprising: receiving a third voice input from an external electronic apparatus, and based on a plurality of words being included in text information corresponding to the third voice input, receiving a plurality of suggested combination texts corresponding to a combination of words of the plurality of words included in text information corresponding to the third voice input through the communicator from the server.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claims 15-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 25 (representative of claim 15), the claim recites “A controlling method of an electronic apparatus, the controlling method comprising:
(a) receiving a user voice input through a voice receiver while a function corresponding to voice recognition is executed;
(b) transmitting text corresponding to the user voice input to a server through a communicator;
(c) receiving a search result corresponding to the user voice input from the server; and
(d) outputting at least one question sentence based on the received search result.” These limitations may be practically performed in the human mind with the aid of pen and paper. For example, limitations (a)-(d) can be performed by observation, evaluation, and judgment: a person can listen to a spoken utterance, recognize its content as a question, formulate at least one question based on a search result, and write the result down. Under its broadest reasonable interpretation when read in light of the specification, the recited “receiving,” “transmitting,” “receiving,” and “outputting” steps encompass mental processes practically performed in the human mind. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of (a) “…voice receiver…” (b) “…server…communicator…” and (c) “…server…” and (d) “…display…the server…” which are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See e.g., MPEP 2106.05(g) (“whether the limitation is significant”). Further, limitations (a) - (d) are recited as being performed by a computing device potentially including at least one server. The server is described as any generic computer device (Specification, see e.g., paras. 53, 54 “…conversation type server 200 located at a near distance and an external server (not shown) providing content, and may be, for example, Bluetooth, Zigbee, etc. The wireless communication module (not shown) is a module connected to an external network according to a wireless communication protocol such as WiFi, IEEE, etc. …” and “…The processor 140 is for controlling an apparatus, may be used with a central processing unit, a microprocessor, a controller, etc., and is used to control general operations of the apparatus. The processor 140 may be coupled to a different function part such as the voice processing unit 150, the communication unit 130, etc. and implemented as a system-on-a-chip (SOC) or a system on chip (SoC)…”). Likewise, the server and processor are recited at a high level of generality. In limitations (a)-(d), the server/receiver/communicator potentially including at least one processor is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. See e.g., MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. 
(Step 2A: YES).
The claim does not include additional elements that are sufficient to amount to more than the judicial exception. As discussed above, the recitation of a computing device including at least one processor to perform limitations (a)-(d) amounts to no more than mere instructions to apply the exception using a generic computer component. Limitations (a)-(d) are considered mere data gathering and output, and are additionally well-understood, routine, conventional activity. See e.g., MPEP 2106.05(d) and 2106.07(a)III. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer, which do not provide an inventive concept (Step 2B).
Accordingly, claims 15 and 25 are directed to patent-ineligible subject matter under 35 U.S.C. 101.
The remaining dependent claims fail to add patent-eligible subject matter to their respective parent claims:
Claims 16 and 26 regard a human performing interactive data gathering with pen and paper, writing down information from the person who asked a question (e.g., a keyword found in a sentence question), and presenting a rewritten question to other people for their approval (wherein the at least one question sentence is a question sentence related to a keyword included in the search result).
Claims 17 and 27 regard transmitting a question sentence, selected from among the at least one question sentence based on a user input received through a manipulation receiver, to the server; mentally processing the question sentence to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of the question (e.g., the manipulation receiver).
Claims 18 and 28, wherein the user voice input is a first user voice input, regard transmitting a question sentence, selected from among the at least one question sentence based on a second user input received through the voice receiver, to the server through multiple verbal user inputs (e.g., question sentences); mentally processing text from the verbal user inputs to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of the multiple verbal user inputs transmitted and received.
Claims 19 and 29 also regard writing down spoken information (e.g., a user voice input that includes two or more words); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of the question sentences.
Claims 20 and 30 also regard writing down spoken information (e.g., receiving the search result based on attribute information related to the two or more words, from the server); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of the question sentences based on the number of words of interest used.
Claims 21 and 31 also regard writing down spoken information (e.g., based on there being a plurality of question sentences, outputting a list including a plurality of question sentences to be selected by a user); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of the question sentences based on the number of words of interest used in a list.
Claims 22 and 32 also regard writing down spoken information (e.g., the plurality of question sentences being included or not included in the list depending on a number of times previously selected); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of selected and/or repeated question sentences based on the number of words of interest used in a list.
Claims 23 and 33 also regard writing down spoken information (e.g., the user voice input being an input received from an external apparatus); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of question sentences received from other human users.
Claims 24 and 34 also regard writing down spoken information (e.g., based on the user voice input corresponding to a sentence speech, outputting the at least one question sentence based on an object name obtained from the user voice input); mentally processing text to understand the claimed interactive information based upon knowledge of a natural language; and mentally evaluating the intent/purpose of question sentences received from another human user where a name is listed.
Claim Rejections - 35 USC § 102
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
7. Claims 15-34 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hossain et al. (S. A. Hossain, A. S. M. M. Rahman, T. T. Tran and A. El Saddik, “Location Aware Question Answering Based Product Searching in Mobile Handheld Devices,” 2010 IEEE/ACM 14th International Symposium on Distributed Simulation and Real Time Applications, Fairfax, VA, USA, 2010, pp. 189-195), hereinafter referred to as HOSSAIN.
With respect to Claims 15 and 25, HOSSAIN discloses:
15. (NEW) An electronic apparatus, comprising: a display; a communicator; a voice receiver; and a processor (See e.g., “…mobile(s) or PDA device(s) …using natural language based question answering system…” and Fig. 5 providing a pocket PC with a display and a communicator with voice receiving capabilities, see e.g., Fig. 5 and §§ IV, VI) configured to, and 25. (NEW) A controlling method of an electronic apparatus, the controlling method comprising: receiv[ing] a user voice input through the voice receiver while a function corresponding to voice recognition is executed, transmit[ting] text corresponding to the user voice input to a server through the communicator, receiv[ing] a search result corresponding to the user voice input from the server, and control[ling] the display to output at least one question sentence based on the received search result (See e.g., the operative capabilities of the Talkme, QA, Query Optimizer and Service Provider modules: “…continually gets conversational text from the user and feedback the user if more filters are possible. This module receives the text query from the user, optimize the query to get better result, select the appropriate service providers based on the location information which is collected from the Talkme agent and finally search the optimized query to the web server and filter the result if necessary and then send it to the user device. The module consists of QA, Query Optimizer, Service locator and Filter sub modules…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 16 and 26, HOSSAIN discloses:
16. (NEW) The electronic apparatus as claimed in claim 15, wherein the at least one question sentence is a question sentence related to a keyword included in the search result (See e.g., “…Talkme then send the first user question to QA. QA will reply if match with any AIML pattern also send the question to the query analyzer. Query analyzer extracts the keyword from this and fetches the search result from the search provider…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 17 and 27, HOSSAIN discloses:
17. (NEW) The electronic apparatus as claimed in claim 15, further comprising: a manipulation receiver, wherein the processor is configured to transmit[ting] a question sentence selected based on a user input received through the manipulation receiver from among the at least one question sentence, to the server (See e.g., the manipulation receiver can be observed from the QA, Query Optimizer, and Service Provider modules; see e.g., “…Talkme then send the first user question to QA. QA will reply if match with any AIML pattern also send the question to the query analyzer. Query analyzer extracts the keyword from this and fetches the search result from the search provider. Then it determines the unmatched keyword and sends random unmatched keywords from them to the QA. QA then select and send to Talkme an appropriate template based on the pattern match. When the user again respond it will again send to the query analyzer in the same way as discussed above but only difference is now query analyzer need not to fetch the search result again from the search providers instead of this it use the previous search result to filter more…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 18 and 28, HOSSAIN discloses:
18. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input is a first user voice input, and the processor is configured to transmit[ting] a question sentence selected based on a second user input received through the voice receiver from among the at least one question sentence, to the server, through the communicator (See e.g., the plurality of conversation query interaction capabilities in the Talkme, QA, Query Optimizer and Service Provider modules: “…continually gets conversational text from the user and feedback the user if more filters are possible. This module receives the text query from the user, optimize the query to get better result, select the appropriate service providers based on the location information which is collected from the Talkme agent and finally search the optimized query to the web server and filter the result if necessary and then send it to the user device. The module consists of QA, Query Optimizer, Service locator and Filter sub modules…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 19 and 29, HOSSAIN discloses:
19. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input includes two or more words (See e.g., “…<pattern>: A sequence of characters that want to match one or more user inputs. A pattern (see Fig. 2) may match only one input or several inputs. If a pattern is “I NEED TO BUY A WATCH”,…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 20 and 30, HOSSAIN discloses:
20. (NEW) The electronic apparatus as claimed in claim 19, wherein the processor is configured to receiv[ing] the search result based on attribute information related to the two or more words, from the server (See e.g., “…<template>: It can specify the response to a particular matched pattern. In template we can use some feature like we can call previously saved value of a variable, we can check whether some portion of a pattern is fall in a class or not. Consider in Fig. 2 there is a pattern which contain , this means that when any input is try to match this pattern it first try to check whether it is a sex word or not that is if the input is either “Men” or “Women” then the input will matched with this pattern successfully. When the pattern matched successfully it’s then compile the template value. In the template portion may contain more sub tag like “That is its a ” which means that it assign previously stored value which name is sex and name respectively. That is if the input is “i want to buy a watch” then the word watch will be store in name variable…” and also see e.g., “…Algorithm 1 The function getSearchResult() get the search result with the help of the service locator and filter sub modules,…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 21 and 31, HOSSAIN discloses:
21. (NEW) The electronic apparatus as claimed in claim 15, wherein the processor is configured to, based on there being a plurality of question sentences, control the display to output[ing] a list including a plurality of question sentences to be selected by a user (See e.g., “…If there are m number of keywords in the user query and n number of keywords on the search result where n ≥ m if the search result is not empty. So some common keywords available in user query and the search result because the search result come based on the user query. If the number of common keywords found between the user query and the search result is m0. Then we can calculate by Equation 1 the number of unmatched keywords available in the search result. Determination of unmatched keyword is shown in Algorithm 1 from line number 9 to 13. Every iteration of the while loop processed one user query at a time. If the number of conversational iteration is L then the total number of matched keywords will be P (see Equation 2). We call this list is optimized keywords list…” in combination with Fig. 4 showResult(), showFilteredResult(), and displayFinalResult() functions and outputs, See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 22 and 32, HOSSAIN discloses:
22. (NEW) The electronic apparatus as claimed in claim 21, wherein the plurality of question sentences are included or not included in the list depending on a number of times previously selected (See e.g., plurality of question sentences capability according to the QA, Query Optimizer, and Service Provider modules, and see e.g., multiple m=determineKeywords(query) and multiple n=determineKeywords(searchResult) with the selectAIMLPattern(U), selectAIMLTemplate(U), showFilteredResult() and displayFinalResult() functions, See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 23 and 33, HOSSAIN discloses:
23. (NEW) The electronic apparatus as claimed in claim 15, wherein the user voice input is an input received through the communicator from an external apparatus rather than from the voice receiver (See e.g., internal and external communication capabilities with local and non-local server user interaction and receiving capabilities in see e.g., “…this prototype only for some simple conversation and the prototype use locally running web service instead of actual web service running in the Internet. Here local web server use a simple XML file which act as a data store for the service. All the product details with the location information are already in this file. In the actual implementation need not to store the product information in a XML file. In that case we can use database server like SQL server or MySql or any other database…” and see e.g., “…Talkme first initiate a task by sending a new command to the QA and then QA will reply the Talkme with a clientID, rest of the time Talkme will have to communicate with the QA by using this ClientID. Talkme then send the first user question to QA. QA will reply if match with any AIML pattern also send the question to the query analyzer. Query analyzer extracts the keyword from this and fetches the search result from the search provider. Then it determines the unmatched keyword and sends random unmatched keywords from them to the QA. QA then select and send to Talkme an appropriate template based on the pattern match. When the user again respond it will again send to the query analyzer in the same way as discussed above but only difference is now query analyzer need not to fetch the search result again from the search providers instead of this it use the previous search result to filter more. In this way the system filter the data until user express interest to see the data that processed or there are no unmatched keywords available…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
With respect to Claims 24 and 34, HOSSAIN discloses:
24. (NEW) The electronic apparatus as claimed in claim 15, wherein the processor is configured to, based on the user voice input corresponding to a sentence speech, control the display to output[ing] the at least one question sentence based on an object name obtained from the user voice input (See e.g., “…<template>: It can specify the response to a particular matched pattern. In template we can use some feature like we can call previously saved value of a variable, we can check whether some portion of a pattern is fall in a class or not. Consider in Fig. 2 there is a pattern which contain, this means that when any input is try to match this pattern it first try to check whether it is a sex word or not that is if the input is either “Men” or “Women” then the input will matched with this pattern successfully. When the pattern matched successfully it’s then compile the template value. In the template portion may contain more sub tag like “That is its a ” which means that it assign previously stored value which name is sex and name respectively. That is if the input is “i want to buy a watch” then the word watch will be store in name variable…” See e.g., HOSSAIN, Figs. 4, 5 and §§ III, IV, VI).
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See e.g., Rosso et al., (P. Rosso, L.-F. Hurtado, E. Segarra and E. Sanchis, “On the Voice-Activated Question Answering,” in IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 1, pp. 75-85, Jan. 2012), disclosing, see e.g., an architecture comprising “…Question answering (QA) …one of the most challenging tasks in the field of natural language processing. It requires search engines that are capable of extracting concise, precise fragments of text that contain an answer to a question posed by the user. The incorporation of voice interfaces to the QA systems adds a more natural and very appealing perspective for these systems. This paper provides a comprehensive description of current state-of-the-art voice-activated QA systems. Finally, the scenarios that will emerge from the introduction of speech recognition in QA will be discussed…” (See e.g., Rosso et al., Abstract).
Please, see PTO-892 for more details.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Edgar Guerra-Erazo whose telephone number is (571) 270-3708. The examiner can normally be reached on M-F 7:30a.m.-5:00p.m. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta can be reached on (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDGAR X GUERRA-ERAZO/Primary Examiner, Art Unit 2656