DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 6-8 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-5, 9-15, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over LeBeau (US 9,881,608).
With respect to claim 12 (similarly claim 2), LeBeau teaches a system (e.g. system 100 Fig 1 col 4 ln 41-43, see also Figs 2 and 4), comprising:
input/output circuitry (e.g. GUI 400 Fig 4) configured to:
receive, at a device, a textual input comprising an erroneous word (e.g. receive, at mobile 200 Fig 2 and/or mobile 400 Fig 4A, textual input comprising an incorrect word, col 5 ln 55-61, col 7 ln 47-62);
control circuitry (e.g. control circuitry of mobile 200/400 Figs 2 and 4A) configured to: based at least in part on detecting that the textual input comprises the erroneous word (e.g. based on detecting that the textual input comprises the incorrect word “we’re”, Fig 4A, col 7 ln 62 - col 8 ln 2), automatically activate an input receiver for receiving an alternate phrase (e.g. automatically activate alternate phrase control 408 Fig 4A for receiving an alternate phrase selection, col 8 ln 3-16);
Although LeBeau teaches detecting that the textual input comprises the erroneous/incorrect word “we’re” in Fig 4A, the embodiment of Fig 4A does not teach: based at least in part on detecting that the textual input comprises the erroneous word, automatically activate an audio receiver for receiving an utterance;
cause to be transcribed, the utterance, using an audio file of the utterance and an indication of a location of the erroneous word within the textual input; and
modify the textual input to replace the erroneous word with a transcribed word from the utterance.
However, LeBeau teaches in the embodiment of Figs 1 and/or 5: automatically activate an audio receiver for receiving an utterance (e.g. automatically activate a microphone for receiving an utterance, Fig 5 S502, col 9 ln 5-8);
cause to be transcribed, the utterance, using an audio file of the utterance (e.g. cause server 104 of Fig 1 to transcribe the utterance using speech data output 108, col 9 ln 9-12) and an indication of a location of the erroneous word within the textual input (e.g. the word lattice 110 includes one or more weighting factors or probabilities that a particular word occurs at a particular location in the transcribed text, col 5 ln 39-42, suggesting an indication of a location of the erroneous word within the textual input); and
modify the textual input (e.g. modify the textual input, Fig 4A) to replace the erroneous word with a transcribed word from the utterance (e.g. replace the erroneous/incorrect word “we’re” with a transcribed word from the utterance, as suggested in col 9 ln 32-43).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the embodiment of Fig 4A with the teachings of the embodiment of Figs 1 and 5 to include: based at least in part on detecting that the textual input comprises the erroneous word, automatically activate an audio receiver for receiving an utterance;
cause to be transcribed, the utterance, using an audio file of the utterance and an indication of a location of the erroneous word within the textual input; and
modify the textual input to replace the erroneous word with a transcribed word from the utterance, as taught in Figs 1 and 5. The benefit of the modification would be to diversify the input methods of Fig 4 to include speech input in addition to text input.
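Purely as an illustrative aside (no such code appears in LeBeau or in the claims, and every name below is hypothetical), the correction flow mapped above — detect an erroneous word, automatically activate an audio receiver, transcribe the utterance using the error's location, and replace the word — can be sketched as:

```python
# Hypothetical sketch of the claimed correction flow; the function names,
# callbacks, and vocabulary check are illustrative assumptions only.

def correct_text(text, vocabulary, record_utterance, transcribe):
    """Replace the first out-of-vocabulary word with a transcribed utterance."""
    words = text.split()
    for index, word in enumerate(words):
        if word.lower() not in vocabulary:          # detect the erroneous word
            audio = record_utterance()              # activate the audio receiver
            replacement = transcribe(audio, index)  # transcribe using the error's location
            words[index] = replacement              # replace the erroneous word
            break
    return " ".join(words)
```

In this sketch the location of the erroneous word (`index`) is passed to the transcriber, loosely paralleling the word lattice's per-location probabilities in LeBeau col 5 ln 39-42.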
With respect to claim 13 (similarly claim 3), LeBeau teaches the system of claim 12, wherein the control circuitry is configured to cause to be transcribed, the utterance, by: transmitting, by the device, the textual input and the utterance to a speech recognition system for a transcription of the utterance (e.g. send audio to server, Fig 5 S504, col 9 ln 9-12; see also speech data output 108 of Fig 1, which is sent to server 104 for transcription; the speech data output 108 includes the textual input of Fig 4 and the utterance, as modified by the embodiment of Figs 1 and 5 in the rejection of claim 12); and receiving, by the device, the transcription of the utterance (e.g. the mobile 102 receives a word lattice transcribed from the speech audio data by the server 104, col 9 ln 13-17).
With respect to claim 14 (similarly claim 4), LeBeau teaches the system of claim 12, wherein the control circuitry is configured to cause to be transcribed, the utterance, by: generating, using the device, a transcription of the utterance (e.g. word lattice 210 transcribed from speech data output 208 generates a transcription of the utterance as modified by Fig 1 and/or 5).
With respect to claim 15 (similarly claim 5), LeBeau teaches the system of claim 12, wherein: the utterance is a first utterance (e.g. the utterance as modified in the rejection of claim 12 is a first utterance); and the textual input is a transcription of a second utterance that comprises an erroneously transcribed word (e.g. the textual input of Fig 4A is a transcription of a second utterance that comprises an erroneously/incorrectly transcribed word as suggested in col 7 ln 57-58 whereby a computing device that provides the GUI 400 can receive a voice or speech input).
With respect to claim 19 (similarly claim 9), LeBeau teaches the system of claim 12, wherein: the input/output circuitry is further configured to receive, via a user interface of the device, a user selection indicating the textual input comprises the erroneous word (e.g. receive, via GUI 400, a user selection indicating the textual input comprises the erroneous word, see Fig 4A, col 7 ln 63 - col 8 ln 38); and
the control circuitry is further configured to detect the textual input comprises the erroneous word by receiving, via the input/output circuitry, the user selection indicating the textual input comprises the erroneous word (e.g. Fig 4A, col 7 ln 63 - col 8 ln 38, detecting that the textual input comprises the erroneous word by receiving, via the input/output circuitry, the user selection indicating the textual input comprises the erroneous word).
With respect to claim 20 (similarly claim 10), LeBeau teaches the system of claim 19, wherein: the control circuitry is further configured to generate, for presentation on a display of the device, the textual input (e.g. Figs 4A-B, col 7 ln 63 - col 8 ln 38, generate, for presentation on a display of the device, the textual input); and
the input/output circuitry is further configured to receive the user selection by receiving, via the display, an interaction with the erroneous word (e.g. GUI 400 receives the user selection by receiving, via the display, an interaction with the erroneous/incorrect word, see Fig 4A item 406).
With respect to claim 21 (similarly claim 11), LeBeau teaches the system of claim 12, wherein the control circuitry is further configured to automatically activate the audio receiver by automatically activating a microphone feature of the device (e.g. automatically activate the audio receiver by automatically activating a microphone feature of the device in Fig 5 S502 where a user may input an utterance into a microphone on a cellular telephone or smartphone).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM SIDDO whose telephone number is (571) 272-4508. The examiner can normally be reached 9:00 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IBRAHIM SIDDO/Primary Examiner, Art Unit 2681