DETAILED ACTION
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/05/2026 has been entered.
Response to Amendments and Arguments
Regarding the outstanding rejection of claims 1-20 under 35 U.S.C. §101, applicant argued (Remarks, page 12) that the specification describes an improvement to an automatic speech recognition system. Applicant stated (Remarks, page 12):
[Applicant's remarks on page 12 were reproduced here as an image (media_image1.png) in the original action.]
In response, the claim limitations remain broadly recited (“select a section type…”; “according to section type…”). These broadly recited limitations do not reflect technical features of how a section type is detected or how the detected section type is used to guide the speech recognition system. The claim limitations could still be interpreted as a mental process. For example, a person reviews a speech transcript (the claimed “first version of the speech transcript”) to correct transcription errors in an overlapping section of the speech (the claimed “selecting a section type … for a portion of the audio data comprising a first speaker and a second speaker”). The amended claims are still directed to a mental process. The rejection under §101 is maintained.
Regarding the rejections over prior art references under 35 U.S.C. §102 and §103, applicant amended the independent claims. Applicant argued (Remarks, pages 9-11) that the previously cited references fail to teach the newly added limitations. In particular, applicant argued (Remarks, pages 9-10) that Tripathi evaluates audio embeddings, not the claimed “speech transcript”, and therefore does not anticipate amended independent claim 1. Applicant further argued that independent claims 5 and 14, as well as the dependent claims, are not anticipated for the same reasons argued for claim 1.
In response, the examiner notes that Tripathi discloses a speech recognition system for generating a transcript from a conversation between two speakers (Fig. 1, Ted and Jane). The initially recognized text contains sequences of characters/words from each speaker ([0035]). Tripathi discloses creating a transcript for the conversation by using the output sequences of characters/words from Ted and Jane ([0027-0028], [0035], correctly identifying words spoken by Ted and those spoken by Jane; Fig. 1, reproduced below from the Tripathi reference).
[Fig. 1 of the Tripathi reference was reproduced here as an image (media_image2.png) in the original action.]
Tripathi discloses identifying and processing different segments of speech (Tripathi, [0003], [0005], [0028], [0035], recognizing speech and evaluating transcripts from (1) Ted-only speech segments, (2) Jane-only speech segments, and (3) overlapped speech segments from both Ted and Jane). Tripathi therefore meets the newly added limitations (“evaluating … selecting one section type …”). In other words, Tripathi's speech recognition system identifying an overlapped segment among various speech segments corresponds to the claimed “select a section type … that includes the overlapping speech of both the first speaker and the second speaker”. Tripathi discloses generating recognized characters/words from each speaker ([0035]), which corresponds to the claimed “first version of the transcript”. Tripathi further discloses combining the output sequences to create a conversation transcript ([0038]), which corresponds to the claimed “second version of the transcript”.
Under the broadest reasonable interpretation, Tripathi's segment types meet the broadly recited “section type”. However, in light of applicant's arguments (Remarks, page 12) and the disclosure (Spec. [0012]), it appears that the claimed “section type” refers to a topic or a type of conversation. In other words, the segment types in Tripathi are different from the claimed “section type”. To advance prosecution, the examiner cites a new reference, Hirshberg et al. (US PG Pub. 2024/0370661).
Hirshberg discloses recognizing speech and generating a summary from the recognized text according to different styles, such as news or academic research presentations (Hirshberg, [0081], Fig. 3). In Hirshberg, the initial speech transcript corresponds to the claimed “first version of the transcript”, and the generated summary corresponds to the claimed “second version of the speech transcript”.
In the following rejection, the examiner rejects the amended claims under §103 by combining Tripathi with Hirshberg. The arguments regarding the previous rejection under §102 have been considered but are moot because they do not apply to the new ground of rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide a basis for the analysis and finding of ineligibility that follows. MPEP §2106(III) states that examiners should determine whether a claim satisfies the criteria for subject matter eligibility by evaluating the claim in accordance with the flowchart in this section.
Claims 1-20 are rejected under 35 U.S.C. §101. The claimed invention is directed to unpatentable subject matter because the claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Eligibility Step 1 (MPEP 2106.03, Statutory category):
Claims 1-4 are directed to a system, claims 5-13 are directed to a method, and claims 14-20 are directed to a computer readable medium. Claims 1-20 therefore fall into one of the four statutory categories of invention (YES branch of Step 1).
Eligibility Step 2A, Prong One (does the claim recite a judicial exception?) (MPEP 2106.04(a)-(c)):
Step 2A is a two-prong inquiry, in which examiners determine in Prong One whether a claim recites a judicial exception, and if so, then determine in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. Together, these prongs represent the first part of the Alice/Mayo test, which determines whether a claim is directed to a judicial exception (see the flowchart in MPEP 2106.04(II)(A)). In Prong One of the two-prong inquiry, the limitations recited in the claims are directed to at least one of the groupings of abstract ideas (MPEP 2106.04(a), “Mathematical concepts”, “Certain methods of organizing human activity”, “Mental Processes”). It should be noted that these groupings are not mutually exclusive; some claims recite limitations that fall within more than one grouping or sub-grouping (MPEP 2106.04(a)(2)).
Although claims 1-20 are directed to one of the four statutory categories of invention (MPEP 2106.03), the claims recite a number of steps (“receiving …”; “generating …”; “detecting …”; “generating …”; and “providing …”). These limitations fall into a judicial exception (MPEP 2106.04(II), “laws of nature”, “natural phenomena” and “abstract ideas”). The Supreme Court has explained that the judicial exceptions reflect the Court's view that abstract ideas, laws of nature, and natural phenomena are "the basic tools of scientific and technological work", and are thus excluded from patentability because "monopolization of those tools through the grant of a patent might tend to impede innovation more than it would tend to promote it." Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980. It should be noted that there are no bright lines between the types of exceptions, and many of the concepts identified by the courts as exceptions can fall under several exceptions (MPEP 2106.04(I) and (II)).
In light of the disclosure (Spec. [0040-0042], Fig. 5B-5C), the claimed subject matter is related to generating a speech transcription of a conversation between two persons. If the speech recognition system detects an overlapped portion of speech (claimed as “detecting, by the automatic speech recognition, a section type for the portion of audio data...”), the recognition system generates speech transcripts for each person (illustrated in Fig. 5C; claimed as “bias speech recognition in favor of a first speaker”). The claimed subject matter can be regarded as a speech transcriptionist (a human) listening to a recorded conversation between a doctor and a patient and writing down a speech transcription. The transcriptionist could write separate speech transcripts for each person even if the transcriptionist notices that the doctor and the patient sometimes talk simultaneously. For example, the limitations of claim 5 can be interpreted as:
receiving (a transcriptionist receives an audio recording containing a conversation between a doctor and a patient for generating a transcription);
generating (the transcriptionist writes down a first version of a speech transcript for the conversation between the doctor and the patient);
evaluating, by the automatic speech recognition system, the first version of the transcript to select a section type of a plurality of section types for the portion of the audio data comprising a first speaker and a second speaker (the transcriptionist notices that a portion of the audio contains overlapped speech by both the doctor and the patient, and checks her previous first version of the transcript);
generating (the transcriptionist carefully listens to the recording and corrects some transcribed words from the doctor, the claimed “a second version of the transcript to bias speech recognition in favor of the first speaker”); and
providing (the transcriptionist presents the corrected speech transcripts to the doctor's office).
From the above interpretation, it can be seen that if independent claim 5 were patented, a speech transcriptionist would infringe the patent while doing her daily work. The examiner notes that claim 5 merely recites “by automatic speech recognition system”, which is equivalent to “apply it” and generally links an abstract idea to a particular technology. Independent claim 14, although directed to a computer readable medium, includes similar features as claim 5. Claim 1 is slightly narrower than claim 5 by spelling out the broad term “section type” as “overlapping speech”.
The courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674. If the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed (1) on a generic computer, (2) in a computer environment, or (3) merely using a computer as a tool to perform the concept, the claim is considered to recite a mental process. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures “can be carried out in existing computers long in use, no new machinery being necessary.” The claims therefore recited an abstract idea, despite the fact that the claimed steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504.
Eligibility Step 2A, Prong Two (integrated into a practical application? MPEP 2106.04(d)):
Since the claimed invention falls into a judicial exception according to the above analysis (YES branch of Prong One in Step 2A), the claim must be evaluated to determine whether it recites additional elements that integrate the judicial exception into a practical application (MPEP 2106.04(II)(A)(2)). In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. The Court in Gottschalk v. Benson held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle. Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible.
In the instant claims, the claims recite limitations that include generic computer elements (“computing device”) and generally link the judicial exception to a technological environment (“by the automatic speech recognition system”; “a machine learning model”). All of these additional elements fail to integrate the abstract idea into a practical application.
Eligibility Step 2B (inventive concept / significantly more consideration; MPEP 2106.05):
MPEP §2106.05 describes the Step 2B test for determining whether a claim amounts to significantly more. The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014). The Supreme Court has identified a number of considerations as relevant to the evaluation of whether the claimed additional elements amount to an inventive concept (see MPEP §2106.05(I)(A)). It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2B. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility.
Considering the limitations recited in the instant claims, the claims do not improve the functioning of a computer or any other technology or technical field. The claims do not apply the judicial exception with, or by use of, a particular machine. The claims do not effect a transformation or reduction of a particular article to a different state or thing. The claims fail to include a specific limitation other than what is well-understood, routine, and conventional activity in the field, or to add unconventional steps that confine the claim to a particular useful application. The recited “processor”/“memory” are well-understood, routine, and conventional in the field. Therefore, the recited elements do not amount to significantly more than the abstract idea.
Simply appending well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception does not amount to significantly more; e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. The courts have also found that “adding insignificant extra-solution activity to the judicial exception” or “generally linking the use of the judicial exception to a particular technological environment or field of use” is not enough to qualify as “significantly more”.
Reviewing the limitations recited in the claims, none of the limitations meets the significantly more considerations. Therefore, the claims are directed to unpatentable subject matter and are rejected under 35 U.S.C. 101 (MPEP §2106, flowchart, Step 2B, NO branch).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 7-10, 14 and 16-17 are rejected under 35 U.S.C. §103 as being unpatentable over Tripathi et al. (US PG Pub. 2021/0343273, hereinafter referred to as Tripathi) in view of Hirshberg et al. (US PG Pub. 2024/0370661, referred to as Hirshberg).
Tripathi discloses generating a speech transcript for a conversation between two persons ([0018], generating a speech transcript for a conversation between a doctor and a patient; [0027-0029], conversations between two friends, Ted and Jane; see Fig. 1). Tripathi discloses generating a transcript as a sequence of words from both Ted and Jane ([0027-0029], Fig. 1, #204, the claimed “first version of a transcript”). Tripathi further discloses that the transcribed words of one person are separated from those of the other person by applying masks ([0003], [0027-0029], [0033-0034], Fig. 1, Ted's speech and Jane's speech are separated, which is the claimed “second version of the transcript”). Tripathi further discloses generating a final transcription of the conversation by combining the transcripts from the first speaker and the second speaker and adding labels and time stamps to indicate who spoke what and when (Tripathi, [0038], the speech recognizer 200 combines the output sequence 242 of each branch 208 to form a sequence of characters and/or words that define the transcript 204 for the conversation between speakers 10; based on the association of each branch 208 with a respective speaker 10, the transcript 204 may include labels indicating which speaker spoke what; the transcript 204 may also include time stamps indicating who spoke what when). Tripathi further discloses processing each type of speech segment ([0003], [0005]); these different segments correspond to the claimed “section type”.
Hirshberg discloses recognizing speech by using speech-to-text models (Hirshberg, [0044]). Hirshberg further discloses using the recognized text as a prompt to a large language model (LLM) to generate a summary of the recognized text according to a selected style, such as news or academic research presentations (Hirshberg, [0081], Fig. 3). In Hirshberg, the text generated by the speech-to-text models corresponds to the claimed “first version of the transcript”, and the summary generated by the LLM from the recognized text corresponds to the claimed “second version of the speech transcript”.
Regarding claims 5 and 14, Tripathi discloses a computer implemented method and a non-transitory medium (Tripathi, Fig. 1 and Fig. 5, a computer implemented system/method for transcribing multi-talker conversations; [0018], a conversation between a doctor and a patient; [0027], a conversation between two friends), comprising:
receiving, at an automatic speech recognition system, audio data for generating a transcription (Tripathi, [0025-0027], Fig. 1, transcribing conversations between two speakers, Ted and Jane);
generating, by the automatic speech recognition system, a first version of a transcript for speech in a portion of the audio data (Tripathi, [0025-0027], Fig. 1, generating a sequence of characters as spoken by both persons; [0035], generating transcripts for each of speakers; Fig. 3C, shows the first speaker branch and the second speaker branch);
evaluating, by the automatic speech recognition system, the first version of the transcript to select a section type of a plurality of section types for the portion of the audio data comprising a first speaker and a second speaker (Tripathi, [0003], [0027-0028], processing an overlapped segment including speech from both Ted and Jane; Fig. 1 shows recognizing speech from Ted and Jane; [0018], [0028], identifying that Ted spoke words and Jane also spoke words at the same time; detecting overlapping speech in the conversation between two speakers);
generating, by the automatic speech recognition system, a second version of the transcript for speech in the portion of the audio data according to the section type, wherein the section type causes the automatic speech recognition system to bias speech recognition in favor of a first speaker in the portion of the audio data over a second speaker in the portion of the audio data (Tripathi, [0033-0034], [0040-0041], Fig. 1, applying a mask model to the overlapped speech to bias toward either Ted's or Jane's speech, obtaining the first speaker's speech from the overlapped speech, and generating a transcription for the first speaker; [0038], combining the output sequences of characters/words to create a speech transcript between the first speaker and the second speaker, adding labels/time stamps to indicate who spoke what and when; this final speech transcript is the claimed “second version of the transcript”); and
providing, by the automatic speech recognition system, the second version of the transcript for speech in the portion of the audio data (Tripathi, [0038], the speech recognizer 200 combines the output sequence 242 of each branch 208 to form a sequence of characters and/or words that define the transcript 204 for the conversation between speakers 10. Based on the association of each branch 208 with a respective speaker 10, the transcript 204 may include labels indicating which speaker spoke what. The transcript 204 may also include time stamps indicating who spoke what when).
Tripathi's processing of non-overlapped speech segments and overlapped speech segments meets the broadly recited “select a section type”. In light of the disclosure (Spec. [0012]) and applicant's argument (page 12), it appears the claimed section type is different from Tripathi's segment types. To advance prosecution, the examiner cites Hirshberg, which discloses selecting a style of transcript for generating a summary of the transcript by inputting the transcript together with a selected style (Hirshberg, [0081], [0085], a chunk of transcript and a selected style as a prompt to a large language model for generating a summary of the transcript; Fig. 3, Fig. 4A).
Both Tripathi and Hirshberg deal with generating speech transcriptions. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Tripathi's teaching with Hirshberg's teaching to select a chunk of transcript with a selected style (i.e., section type). One having ordinary skill in the art would have been motivated to make such a modification to generate a high-quality summary (Hirshberg, [0006]).
Regarding claims 7 and 16, Tripathi in view of Hirshberg further discloses the audio data is received as part of a batch of audio files for generating respective transcriptions for individual ones of the audio files in the batch (Tripathi, [0018], generating speech transcripts from audio recordings including conversations between a doctor and a patient).
Regarding claim 8, Tripathi in view of Hirshberg further discloses the audio data is received as part of a stream of audio data for performing real-time transcription on the stream of audio data (Tripathi, [0026], performing speech transcription for streaming audio).
Regarding claim 9, Tripathi in view of Hirshberg further discloses the section type is one of a plurality of section types that are specified in a request to the automatic speech recognition system for performing transcription (Tripathi, [0016-0018], conversations between two persons such as a doctor / a patient; [0027] conversation between two friends; in light of the specification [0038], the claimed “section type” could be “education”, “planning”, “medical” or any type of section type).
Regarding claim 10, Tripathi in view of Hirshberg further discloses the second version of the transcript combines different sections of text spoken by the first speaker and interleaved with further sections of further text spoken by the second speaker (Tripathi, [0006-0018], [0027], a conversation between a doctor and a patient, interleaved between two talkers; Fig. 1 shows overlapped speech transcripts are separated according to each of speakers; [0027-0028], transcript for Ted’s speech is separated from that of Jane’s speech).
Independent claim 1 recites limitations by spelling out broader terms recited in independent claim 5. For example, claim 1 includes “overlapping speech between a first speaker and a second speaker” instead of broadly reciting “a section type”. Claim 1 further recites a limitation “according to a machine learning model”.
Tripathi discloses recognizing overlapped speech between a first speaker and a second speaker using a machine learning model (Tripathi, [0017-0018], [0020-0022], using an end-to-end recurrent neural network transducer (RNN-T) model, which is “a machine learning model”). Claim 1 also includes the limitations of claim 10. Therefore, claim 1 is rejected based on the same rationale as explained for independent claim 5 and dependent claim 10.
Regarding claim 17, Tripathi in view of Hirshberg further discloses that the second version of the transcript discards one or more sections of text spoken by the second speaker (Tripathi, [0026-0028], properly converting overlapping segments into Ted's speech by using masking, which implies discarding the overlapped content from Jane).
Claims 2-4, 6, 12-13, 15, 19-20 are rejected under 35 U.S.C. §103 as being unpatentable over Tripathi in view of Hirshberg and further in view of Strader et al. (US PG Pub. 2019/0121532, referred to as Strader).
Tripathi discloses generating speech transcripts of conversations between two persons ([0018], a conversation between a doctor and a patient; [0027], a conversation between two friends; Fig. 1). Although Tripathi implicitly discloses many features defined by these dependent claims, the examiner further cites Strader to show that the claimed features are obvious.
Strader discloses generating speech transcriptions from a conversation between a doctor and a patient, and also generating a summary of the speech transcriptions (Strader, [0042-0043], Fig. 3).
Regarding claims 2, 6 and 15, Tripathi focuses on generating correct transcription content when the conversation has overlapped speech segments (Tripathi, [0017-0019], [0024], Fig. 1). Tripathi does not explicitly disclose “the section type is provided along with the second version of the transcript to a system that performs a downstream natural language processing task”.
Strader discloses generating speech transcriptions from a conversation between a doctor and a patient. The transcriptions are analyzed to extract named entities and to obtain a summary of the conversation (Strader, [0035-0036], [0042], [0057], Fig. 5 and Fig. 6; note that generating a summary of the conversation is the claimed “downstream natural language processing task”).
Tripathi, Hirshberg, and Strader all deal with generating speech transcriptions. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tripathi's teaching with Strader's teaching to analyze the speech transcripts and generate a summary of the conversation. One having ordinary skill in the art would have been motivated to make such a modification to save the doctor's time in documenting the patient's visit (Strader, [0002]).
Regarding claims 3, 12 and 19, these dependent claims relate to a further conversation between two persons (claimed “receive further audio data”, “different section type”).
Tripathi discloses various conversations, which meets “receive further audio data” and “different section type” (Tripathi, [0016-0018], [0027]). Although Tripathi implicitly discloses the limitations recited in these dependent claims, the examiner further cites Strader, which shows different multi-turn conversations between doctors and patients (Strader, [0040-0043], [0057], Fig. 3, Fig. 8).
Tripathi, Hirshberg, and Strader all deal with generating speech transcriptions. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Tripathi's teaching with Strader's teaching to further process other conversations. One having ordinary skill in the art would have been motivated to make such a modification to save the doctor's time in documenting the patient's visit (Strader, [0002]).
Regarding claims 4, 13 and 20, Tripathi in view of Hirshberg and Strader further discloses wherein the service of the provider network is a medical audio summary service, wherein the audio data is identified according to a request to summarize the audio data received via an interface of the medical audio summary service, and wherein the second version of the transcript is provided to an audio summarization task that generates a summary of the audio data (Strader, [0042], [0057], Fig. 16 and Fig. 17, generating a summary from speech transcripts between a doctor and a patient; Fig. 2 shows a remote server).
Claims 11 and 18 are rejected under 35 U.S.C. §103 as being unpatentable over Tripathi in view of Hirshberg and further in view of Siohan et al. (US PG Pub. 2022/0392439, referred to as Siohan).
Regarding claims 11 and 18, Tripathi discloses generating speech transcripts based on conversations between two persons (Tripathi, Fig. 1). Tripathi does not explicitly disclose “generating the second version of the transcript for speech in the portion of the audio data according to the section type comprises rescoring one or more hypothetical transcriptions using the section type”.
Siohan discloses rescoring ASR hypotheses and selecting a best candidate (Siohan, [0003], [0054-0058], [0070]).
Tripathi, Hirshberg, and Siohan all deal with speech recognition. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tripathi's teaching with Siohan's teaching to rescore hypothesized transcriptions. One having ordinary skill in the art would have been motivated to make such a modification to improve the accuracy of speech recognition (Siohan, [0027]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIALONG HE whose telephone number is (571)270-5359. The examiner can normally be reached on Monday-Thursday, 7:00AM-4:30PM, ALT. Fridays, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir can be reached on (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIALONG HE/Primary Examiner, Art Unit 2659