DETAILED ACTION
This Office Action is in response to the correspondence filed by the applicant on 2/27/2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Information Disclosure Statement (IDS) filed on 3/6/2026 has been accepted and considered in this Office action and is in compliance with the provisions of 37 CFR 1.97.
Allowable Subject Matter
Claims 17 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive. The Examiner reviewed the specification as filed; in [0297], the term “synonym” refers to different vocabulary words for the same command. For example, “recording,” “voice note,” “voice memo,” and “dictation” all correspond to performing a dictation action. Similarly, PARK describes a situation in which “5.1 channels” is used as a synonym for activating the “home theatre device,” as described in [0300]. Thus, the vocabulary entry includes an identifier (e.g., home theatre device) and its synonym (e.g., 5.1 channels). Under another interpretation, the identifiers (e.g., [header], [sound preset], etc.) have synonyms such as DVD 1, Home theater, channel 7.1, channel 5.1, show me, etc. For at least the reasons above, the Examiner maintains the rejections.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-25 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-55 of US PAT 11,978,436. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by, or would have been obvious over, the claims of the US PAT. Please see the mapping in the table below, where the bolded limitations indicate the corresponding limitations between the US PAT and the instant application.
Instant application 18/611,526, Claim 1:
1. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
obtaining, from a software application, a first vocabulary entry for the software application, wherein the first vocabulary entry includes a first identifier of a first class handled by the software application, and wherein the first vocabulary entry includes a first synonym for the first identifier;
registering the first vocabulary entry with a knowledge base for a digital assistant of the electronic device; and
while the software application is running:
receiving a request from the software application to register a second vocabulary entry for the software application; and
registering the second vocabulary entry with the knowledge base for the digital assistant.
US PAT 11,978,436, Claim 1:
1. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
obtaining, from a software application, a first portion of an application vocabulary for the software application, wherein the first portion of the application vocabulary includes at least a vocabulary entry of a first type;
obtaining from the software application while the software application is running on the electronic device, a second portion of the application vocabulary, wherein the second portion includes at least a vocabulary entry of a second type;
registering the application vocabulary to a knowledge base for a digital assistant of the electronic device;
receiving a user input;
determining whether the user input corresponds to a first vocabulary entry of the application vocabulary;
in accordance with a determination that at least a first portion of the user input matches the first vocabulary entry:
determining, by the digital assistant using natural language processing techniques and based at least in part on the matching first vocabulary entry, a first action to be performed by the software application; and
causing the software application to perform the first action based on the first vocabulary entry; and
in accordance with a determination that the software application has been uninstalled from the electronic device, deregistering the first vocabulary entry and the second vocabulary entry from the knowledge base.
The other independent claims 24 and 25 are similarly mapped to independent claims 22 and 39 of the US PAT.
With respect to the dependent claims, each claim maps to a corresponding dependent claim of the US PAT or is found within the scope of the corresponding independent claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4-9, 11-12, 19-20, and 24-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PARK (US 2017/0365251 A1).
REGARDING CLAIM 1, PARK discloses an electronic device, comprising:
one or more processors (PARK Fig. 34 – “control unit”);
a memory (PARK Fig. 34 – “memory”); and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions (PARK Par 425 – “computer readable codes”) for:
obtaining, from a software application, a first vocabulary entry for the software application (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 339 – “An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands.”; Fig. 23 – “Device information 2320”), wherein the first vocabulary entry includes a first identifier of a first class handled by the software application (PARK Figs. 15 and 19; Par 252 – “The header 1511 may include information for identifying the music player program 1510 and may include information regarding type, storage location, and name of the music player program 1510.”), and wherein the first vocabulary entry includes a first synonym for the first identifier (PARK Figs. 15, 18, and 19 – “Show me [video name] in 5.1 channels”; Par 300 – “Referring to FIG. 18, the module selecting and instructing unit 1852 may receive ‘show me winter kingdom in 5.1 channels’ as a result of speech recognition from the speech recognition device 1830. Since the result of the speech recognition does not include a device identifier or an application identifier, the DVD player device 1860 and the home theatre device 1870 to transmit a command thereto may be determined based on situation information or a keyword for a command.”; Par 312 – “The sound preset 1936 may include information about available settings regarding sound output of the home theatre device 1923. 
If the home theatre device 1923 may be set to 7.1 channels, 5.1 channels, and 2.1 channels, the sound preset 1936 may include 7.1 channels, 5.1 channels, and 2.1 channels as information regarding available settings regarding channels of the home theatre device 1923. Other than channels, the sound preset 1936 may include an equalizer setting, a volume setting, etc., and may further include information regarding various available settings with respect to the home theatre device 1923 based on user settings.”; Par 307 – “A speech instruction 1911 is an example of a result of speech recognition that may be output based on a speech recognition according to an embodiment. If the speech instruction 1911 includes name of a video and 5.1 channels, the module selecting and instructing unit 1922 may select the DVD player device 1921 and the home theatre device 1923 capable of playing back the video as devices for transmitting commands thereto”; Par 230 – “If only an appearance probability regarding a command ‘Play [Song]’ exists in a language model, appearance probability information regarding a command ‘Let me listen to [Song]’ may be added to the language model based on a user definition.”);
registering the first vocabulary entry with a knowledge base for a digital assistant of the electronic device (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”); and
while the software application is running (PARK Par 170 – “For example, when a new word is obtained at a particular module or while a module is being executed, situation information may include the particular module or information regarding the module being executed.”; Par 175 – “In an operation S1017, the speech recognition data updating device 420 may generate new word information for adding the word detected in the operation S1003 to the first language model.”; Par 105 – “Therefore, in the method of updating a language model according to an embodiment, a language model may be updated with respect to a new word within a few seconds, and the speech recognition device 230 may reflect the new word in speech recognition in real time.”; Par 256 – “Furthermore, when a new event regarding an application occurs, the user device 1450 may update information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, and transmit the updated information to the speech recognition data updating device 1520. Therefore, the speech recognition data updating device 1520 may update a language model based on the latest information regarding the application.”; Par 259 – “If a memo program is currently being executed, the speech recognition device 1430 may perform speech recognition by applying a weight to a language model corresponding to a music player program that has been simultaneously used with the memo program.”):
receiving a request from the software application to register a second vocabulary entry for the software application (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520. Furthermore, when a new event regarding an application occurs, the user device 1450 may update information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, and transmit the updated information to the speech recognition data updating device 1520. Therefore, the speech recognition data updating device 1520 may update a language model based on the latest information regarding the application.”; Par 254 – “The music information 1513 may include information regarding music that may be played back by the music player program 1510. For example, the music information 1513 may include identification information regarding music files that may be played back by the music player program 1510 and classification information thereof, such as information regarding albums and singers.”; Par 339 – “An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands. The user device 2310 may transmit at least one of information regarding tasks of the application 2311 that may be performed based on commands and information regarding texts that may be included in commands to the voice recognition data updating device 2320. 
The voice recognition data updating device 2320 may perform speech recognition based on the information received from the user device 2310.”); and
registering the second vocabulary entry with the knowledge base for the digital assistant (PARK Par 227 – “The speech recognition data updating device 1420 may determine a language model to add a new word included in the language data 1410 based on the situation information received from the situation information managing unit 1451. If no language model corresponding to the situation information exists, the speech recognition data updating device 1420 may generate a new language model and add appearance probability information regarding a new word to the newly generated language model.”; Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520. Furthermore, when a new event regarding an application occurs, the user device 1450 may update information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, and transmit the updated information to the speech recognition data updating device 1520. Therefore, the speech recognition data updating device 1520 may update a language model based on the latest information regarding the application.”).
REGARDING CLAIM 4, PARK discloses the electronic device of claim 1, wherein the second vocabulary entry includes a second identifier of a second class handled by the software application (PARK Figs. 15, 18, and 19 – “Show me [video name] in 5.1 channels”; Par 254 –“For example, the music information 1513 may include identification information regarding music files that may be played back by the music player program 1510 and classification information thereof, such as information regarding albums and singers.”; Par 300 – “Referring to FIG. 18, the module selecting and instructing unit 1852 may receive ‘show me winter kingdom in 5.1 channels’ as a result of speech recognition from the speech recognition device 1830. Since the result of the speech recognition does not include a device identifier or an application identifier, the DVD player device 1860 and the home theatre device 1870 to transmit a command thereto may be determined based on situation information or a keyword for a command.”; Par 307 – “A speech instruction 1911 is an example of a result of speech recognition that may be output based on a speech recognition according to an embodiment. If the speech instruction 1911 includes name of a video and 5.1 channels, the module selecting and instructing unit 1922 may select the DVD player device 1921 and the home theatre device 1923 capable of playing back the video as devices for transmitting commands thereto”).
REGARDING CLAIM 5, PARK discloses the electronic device of claim 1, wherein the first vocabulary entry is associated with a first command for the software application (PARK Par 290 – “The speech recognition data updating device 1820 may add appearance probability information regarding ‘winter kingdom’ and ‘5.1 channels’ to at least one or more language models respectively corresponding to the DVD player device 1860 and the home theatre device 1870.”; Fig. 15 – “Player 1 Play back”).
REGARDING CLAIM 6, PARK discloses the electronic device of claim 1, wherein the second vocabulary entry is associated with a second command for the software application (PARK Par 289 – “The speech recognition data updating device 1820 may detect new words ‘winter kingdom’ and ‘5.1 channels’ included in the language data 1810. Situation information regarding the word ‘winter kingdom’ may include information regarding related to a digital versatile disc (DVD) player device 1860 for movie playback. Furthermore, situation information regarding the word ‘5.1 channels’ may include information regarding a home theatre device 1870 for audio output.”).
REGARDING CLAIM 7, PARK discloses the electronic device of claim 1, wherein obtaining the first vocabulary entry for the software application is performed in response to receiving a user input requesting to install the software application (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520. Furthermore, when a new event regarding an application occurs, the user device 1450 may update information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, and transmit the updated information to the speech recognition data updating device 1520. Therefore, the speech recognition data updating device 1520 may update a language model based on the latest information regarding the application.”; Par 339 – “The user device 2310 may include various types of terminal devices that may be used by a user, where at least one application may be installed thereon. An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands. The user device 2310 may transmit at least one of information regarding tasks of the application 2311 that may be performed based on commands and information regarding texts that may be included in commands to the voice recognition data updating device 2320. The voice recognition data updating device 2320 may perform speech recognition based on the information received from the user device 2310.”).
REGARDING CLAIM 8, PARK discloses the electronic device of claim 1, wherein the software application is installed on the electronic device (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 339 – “The user device 2310 may include various types of terminal devices that may be used by a user, where at least one application may be installed thereon. An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’”).
REGARDING CLAIM 9, PARK discloses the electronic device of claim 1, wherein obtaining the first vocabulary entry for the software application is performed in response to launching the software application (PARK Par 84 – “The speech recognition data updating device 220 may collect the language data 210 and update speech recognition data periodically or when an event occurs. For example, when a screen image on a display unit of a user device is switched to another screen image, the speech recognition data updating device 220 may collect the language data 210 included in the switched screen image and update speech recognition data based on the collected language data 210. The speech recognition data updating device 220 may collect the language data 210 by receiving the language data 210 included in the screen image on the display unit from the user device.”; Par 170 – “Situation information according to an embodiment may include at least one of information regarding a user, module identification information, information regarding location of a device, and information regarding a location at which a new word is obtained. For example, when a new word is obtained at a particular module or while a module is being executed, situation information may include the particular module or information regarding the module being executed. 
If the new word is obtained while a particular speaker is using the speech recognition data updating device 420 or the new word is related to the particular speaker, situation information regarding the new word may include information regarding the particular speaker.”; Par 217 – “Weights that may be applied to respective appearance probabilities may be determined based on situation information or various other conditions, e.g., information regarding a user, a region, a command history, a module being executed, etc”; Par 238 – “Situation information may include information regarding a module being currently executed on the user device 1450, a history of using modules, a history of voice commands, information regarding an application that may be executed on the user device 1450 and corresponds to an existing language model, information regarding a user currently using the user device 1450, etc.”; Par 240 – “If situation information indicates that the speech data 1440 is obtained from the user device 1450 while the application A is being executed, the speech recognition device 1430 may select a language model corresponding to at least one of the application A and the user device 1450.”).
REGARDING CLAIM 11, PARK discloses the electronic device of claim 1, wherein obtaining the first vocabulary entry for the software application includes retrieving the first vocabulary entry from a first data file of the software application (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 332 – “The speech recognition data updating device 2220 may include information regarding a user from the user device 2210, the information including an address book 2211, an installed application list 2212, and a stored album list 2213. However, the present invention is not limited thereto, and the speech recognition data updating device 2220 may receive various information regarding the user device 2210 from the user device 2210.”; Par 339 – “The user device 2310 may include various types of terminal devices that may be used by a user, where at least one application may be installed thereon. An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’).
REGARDING CLAIM 12, PARK discloses the electronic device of claim 1, wherein registering the first vocabulary entry to the knowledge base includes registering first metadata in association with the first vocabulary entry (Fig. 23 Unit 2311; PARK Par 332 – “The speech recognition data updating device 2220 may include information regarding a user from the user device 2210, the information including an address book 2211, an installed application list 2212, and a stored album list 2213. However, the present invention is not limited thereto, and the speech recognition data updating device 2220 may receive various information regarding the user device 2210 from the user device 2210.”; Par 339 – “The user device 2310 may include various types of terminal devices that may be used by a user, where at least one application may be installed thereon. An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’”).
REGARDING CLAIM 19, PARK discloses the electronic device of claim 1, wherein registering the second vocabulary entry to the knowledge base includes registering second metadata in association with the second vocabulary entry (Fig. 23; PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 332 – “The speech recognition data updating device 2220 may include information regarding a user from the user device 2210, the information including an address book 2211, an installed application list 2212, and a stored album list 2213. However, the present invention is not limited thereto, and the speech recognition data updating device 2220 may receive various information regarding the user device 2210 from the user device 2210.”).
REGARDING CLAIM 20, PARK discloses the electronic device of claim 19, wherein the second metadata includes automatic speech recognition (ASR) metadata for the second vocabulary entry (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 332 – “The speech recognition data updating device 2220 may include information regarding a user from the user device 2210, the information including an address book 2211, an installed application list 2212, and a stored album list 2213. However, the present invention is not limited thereto, and the speech recognition data updating device 2220 may receive various information regarding the user device 2210 from the user device 2210.”), and wherein the one or more programs further include instructions for: determining the ASR metadata for the second vocabulary entry (PARK Par 343 – “The language model updating unit 2322 may update a language model, which may be used to perform speech recognition, based on the device information 2321. A language model that may be updated based on the device information 2321 may include a second language model corresponding to the user device 2310 from among the at least one second language model 2323. Furthermore, a language model that may be updated based on the device information 2321 may include a second language model corresponding to the application 2311 from among the at least one second language model 2323.”).
REGARDING CLAIM 24, PARK discloses a non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to perform the steps recited in claim 1. Thus, claim 24 is rejected under the same rationale.
REGARDING CLAIM 25, PARK discloses a method for registering application terminology, comprising, at an electronic device with one or more processors and memory (PARK Fig. 34), performing the steps recited in claim 1. Thus, claim 25 is rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over PARK (US 2017/0365251 A1) in view of ROBINSON (US 2018/0336893 A1).
REGARDING CLAIM 10, PARK discloses the electronic device of claim 1.
ROBINSON discloses a method/system for human-computer interaction using installed applications, wherein obtaining the first vocabulary entry for the software application is initiated by the digital assistant of the electronic device (ROBINSON Par 10 – “In some further embodiments, any user can customize commands associated with an action that corresponds to an application installed on their computing device. Very simply, a user can employ the digital assistant to identify the application from a list of installed applications, select an option to add a new command to the action, and announce a new command for association with the action. In this regard, the user can create any custom command to invoke the action with which the custom command is associated. In some aspects, the custom command and/or modified action can be uploaded to the server for analysis, as noted above. In some further aspects, based on the analysis, the server can distribute the custom command and/or modified action to a plurality of other computing devices having an instance of the digital assistant executing thereon. It is also possible that some of these commands are used in training the machine learning command matching algorithm, and other commands that are not suitable are dropped from the training model. In this regard, the list of possible actions and associated commands can continue to grow and become automatically available to any user of the digital assistant, or to the machine learning algorithms that are trained using these commands to create a more robust matching model for user commands”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include obtaining vocabularies initiated by a digital assistant, as taught by ROBINSON.
One of ordinary skill would have been motivated to include obtaining vocabularies initiated by a digital assistant, in order to automatically make the commands associated with applications available to the user.
Claims 13, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over PARK (US 2017/0365251 A1) in view of KATAYAMA (US 2004/0006460 A1).
REGARDING CLAIM 13, PARK discloses the electronic device of claim 1, wherein registering the first vocabulary entry to the knowledge base [includes indexing the first vocabulary entry] in a searchable database for the digital assistant (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 339 – “An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands.”; Fig. 23 – “Device information 2320”; Par 230 – “If only an appearance probability regarding a command ‘Play [Song]’ exists in a language model, appearance probability information regarding a command ‘Let me listen to [Song]’ may be added to the language model based on a user definition.”).
PARK does not explicitly teach the [square-bracketed] limitations. In other words, PARK teaches determining whether or not a command exists in a language model by searching the language model database. Since the data in the database is searchable, the data in the language model is implicitly [indexed]. Although PARK implicitly suggests the [square-bracketed] limitations, the Examiner provides KATAYAMA for clarity of the rejection.
KATAYAMA discloses a method/system for natural language processing comprising: registering vocabulary entries [includes indexing the first vocabulary entry] in a searchable database for the digital assistant (KATAYAMA Par 51 – “The dictionary database 12 includes a merge dictionary 52 which stores synonyms indicating concepts similar to element words and an index word database 42 including element words related to index word character strings and character strings indicating the readings of index word character strings. The search processing section 121 includes an element word extract section 122 which searches for input words using element words and synonyms stored in the merge dictionary when the input word judgment section judges that the input word is neither kana nor Roman alphabetic and that the number of characters of the input word is over a predetermined number and an index word extract section 124 and extracts index word character strings which correspond to element words from the index word database and displays extracted index word character strings.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include indexing the vocabulary entry, as taught by KATAYAMA.
One of ordinary skill would have been motivated to include indexing the vocabulary entry, in order to efficiently locate data in a database.
REGARDING CLAIM 16, PARK discloses the electronic device of claim 1.
PARK does not explicitly teach an ordered set of vocabulary entries.
KATAYAMA discloses a method/system for natural language processing, wherein the second vocabulary entry is included in an ordered set of vocabulary entries (KATAYAMA Par 20 – “… and displays extracted the index word character strings with an order of high importance when the input word is input;”; Par 97 – “When there is more than one completely-matching index word character string 1, all the index word character strings 2 that correspond to the index word character strings 1 are displayed in the ascending order of the index word database.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include an ordered set of data, as taught by KATAYAMA.
One of ordinary skill would have been motivated to include an ordered set of data, in order to efficiently locate data in a database.
REGARDING CLAIM 21, PARK discloses the electronic device of claim 1, wherein registering the second vocabulary entry to the knowledge base [includes indexing the second vocabulary entry] in a searchable database for the digital assistant (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520.”; Par 339 – “An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands.”; Fig. 23 – “Device information 2320”; Par 230 – “If only an appearance probability regarding a command ‘Play [Song]’ exists in a language model, appearance probability information regarding a command ‘Let me listen to [Song]’ may be added to the language model based on a user definition.”).
PARK does not explicitly teach the [square-bracketed] limitations. In other words, PARK teaches determining whether or not a command exists in a language model by searching the language model database. Since the data in the database is searchable, the data in the language model is implicitly [indexed]. Although PARK implicitly suggests the [square-bracketed] limitations, the Examiner provides KATAYAMA for clarity of the rejection.
KATAYAMA discloses a method/system for natural language processing comprising: registering vocabulary entries [includes indexing the first vocabulary entry] in a searchable database for the digital assistant (KATAYAMA Par 51 – “The dictionary database 12 includes a merge dictionary 52 which stores synonyms indicating concepts similar to element words and an index word database 42 including element words related to index word character strings and character strings indicating the readings of index word character strings. The search processing section 121 includes an element word extract section 122 which searches for input words using element words and synonyms stored in the merge dictionary when the input word judgment section judges that the input word is neither kana nor Roman alphabetic and that the number of characters of the input word is over a predetermined number and an index word extract section 124 and extracts index word character strings which correspond to element words from the index word database and displays extracted index word character strings.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include indexing the vocabulary entry, as taught by KATAYAMA.
One of ordinary skill would have been motivated to include indexing the vocabulary entry, in order to efficiently locate data in a database.
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over PARK (US 2017/0365251 A1) in view of VINEGRAD (US 2016/0066360 A1).
REGARDING CLAIM 14, PARK discloses the electronic device of claim 1, wherein the request from the software application is received (PARK Par 256 – “When a new application is installed, the user device 1450 according to an embodiment may transmit information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, to the speech recognition data updating device 1520. Furthermore, when a new event regarding an application occurs, the user device 1450 may update information regarding the application, which includes the header 1511, the command language 1512, and the music information 1513, and transmit the updated information to the speech recognition data updating device 1520. Therefore, the speech recognition data updating device 1520 may update a language model based on the latest information regarding the application.”; Par 254 – “The music information 1513 may include information regarding music that may be played back by the music player program 1510. For example, the music information 1513 may include identification information regarding music files that may be played back by the music player program 1510 and classification information thereof, such as information regarding albums and singers.”; Par 339 – “An application 2311 installed on the user device 2310 may include information regarding tasks that may be performed according to commands, For example, the application 2311 may include ‘Play,’ ‘Pause,’ and ‘Stop’ as information regarding tasks corresponding to commands ‘Play,’ ‘Pause,’ and ‘Stop.’ Furthermore, the application 2311 may include information regarding texts that may be included in commands. The user device 2310 may transmit at least one of information regarding tasks of the application 2311 that may be performed based on commands and information regarding texts that may be included in commands to the voice recognition data updating device 2320. 
The voice recognition data updating device 2320 may perform speech recognition based on the information received from the user device 2310.”) [as an application programming interface (API) call].
PARK is silent as to the [square-bracketed] limitation.
VINEGRAD discloses the [square-bracketed] limitation. Specifically, VINEGRAD discloses a method/system for interacting with installed software applications on an electronic device, wherein the request from the software application is received as [an application programming interface (API) call] (VINEGRAD Par 42 – “In some embodiments, an application programming interface exposes routines that applications can call in order to communicate over the Internet. These routines can be implemented by a daemon process that executes on the same mobile device on which the applications execute.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include an API call, as taught by VINEGRAD.
One of ordinary skill would have been motivated to include an API call, in order to enable applications to efficiently interact with interfaces (Par 4).
REGARDING CLAIM 15, PARK discloses the electronic device of claim 1.
PARK does not explicitly teach the request is received via a daemon.
VINEGRAD discloses a method/system for interacting with installed software applications on an electronic device, wherein the request from the software application is received via a daemon (VINEGRAD Par 42 – “In some embodiments, an application programming interface exposes routines that applications can call in order to communicate over the Internet. These routines can be implemented by a daemon process that executes on the same mobile device on which the applications execute.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include a daemon, as taught by VINEGRAD.
One of ordinary skill would have been motivated to include a daemon, in order to enable applications to efficiently interact with interfaces (Pars 4 and 20).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over PARK (US 2017/0365251 A1) in view of GUADARRAMA (US 2016/0313958 A1).
REGARDING CLAIM 22, PARK discloses the electronic device of claim 1, wherein the one or more programs further include instructions for:
in response to receiving a request to launch the software application, [deregistering] updating the second vocabulary entry from the knowledge base (PARK Par 84 – “The speech recognition data updating device 220 may collect the language data 210 and update speech recognition data periodically or when an event occurs. For example, when a screen image on a display unit of a user device is switched to another screen image, the speech recognition data updating device 220 may collect the language data 210 included in the switched screen image and update speech recognition data based on the collected language data 210. The speech recognition data updating device 220 may collect the language data 210 by receiving the language data 210 included in the screen image on the display unit from the user device.”; Par 170 – “Situation information according to an embodiment may include at least one of information regarding a user, module identification information, information regarding location of a device, and information regarding a location at which a new word is obtained. For example, when a new word is obtained at a particular module or while a module is being executed, situation information may include the particular module or information regarding the module being executed. 
If the new word is obtained while a particular speaker is using the speech recognition data updating device 420 or the new word is related to the particular speaker, situation information regarding the new word may include information regarding the particular speaker.”; Par 217 – “Weights that may be applied to respective appearance probabilities may be determined based on situation information or various other conditions, e.g., information regarding a user, a region, a command history, a module being executed, etc”; Par 238 – “Situation information may include information regarding a module being currently executed on the user device 1450, a history of using modules, a history of voice commands, information regarding an application that may be executed on the user device 1450 and corresponds to an existing language model, information regarding a user currently using the user device 1450, etc.”; Par 240 – “If situation information indicates that the speech data 1440 is obtained from the user device 1450 while the application A is being executed, the speech recognition device 1430 may select a language model corresponding to at least one of the application A and the user device 1450.”).
PARK does not explicitly teach the [square-bracketed] limitation. In other words, PARK teaches updating the vocabulary entry when the software application is launched, but does not explicitly teach the updating includes deregistering.
GUADARRAMA discloses a method/system for human-computer interface through commands comprising updating the command words including [deregistering] the second vocabulary entry from the knowledge base (GUADARRAMA Par 39 – “In further embodiments, the extension and/or commands may be updated in response to detecting an update of the client application. For example, one or more additional commands may be added to the extension, one or more of the commands may be removed from the extension, and/or one or more of the commands may be updated.”; Par 73 – “In further examples, performing the action based on the one or more definitions may include launching a task pane, a menu, and/or a dialog of the client application through the user interface of the host application. Performing the action based on the one or more definitions may also include triggering development of custom developer code to perform the action. The extension may be removed from the user interface of the host application by un-installing the client application from a device on which the host application is deployed, in response to detecting an update to the client application, an update to the extension may be enabled, where the update may include displaying one or more additional commands, removing at least one of the one or more commands from display, and updating the one or more commands displayed.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include modifying the vocabulary entry by deregistering the vocabulary entry, as taught by GUADARRAMA.
One of ordinary skill would have been motivated to include modifying the vocabulary entry by deregistering the vocabulary entry, in order to save storage and reduce computational cost by removing commands that are no longer needed or wanted.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over PARK (US 2017/0365251 A1) in view of NAVARRO (US 2020/0142714 A1).
REGARDING CLAIM 23, PARK discloses the electronic device of claim 1.
PARK is silent as to uninstalling the software application and deregistering the vocabulary entries.
NAVARRO discloses a method/system for natural language processing for computing devices, wherein the one or more programs further include instructions for:
in accordance with a determination that the software application has been uninstalled from the electronic device, deregistering the first vocabulary entry and the second vocabulary entry from the knowledge base (NAVARRO claim 2 – “The method of claim 1, wherein installing the first plugin corresponds to adding a first command to the CLI and said uninstalling the second plugin corresponds to removing a second command from the CLI.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of PARK to include deregistering the vocabulary entry for uninstalled applications, as taught by NAVARRO.
One of ordinary skill would have been motivated to include deregistering the vocabulary entries for uninstalled applications, in order to save storage and reduce computational cost by removing commands that are no longer needed or wanted.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN C KIM whose telephone number is (571)272-3327. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C Flanders can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN C KIM/Primary Examiner, Art Unit 2655