DETAILED ACTION
This is responsive to the amendment filed 19 November 2025.
Claims 1-4, 6-9, 12-13, 15, 17-18 and 23-26 are currently pending and considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 19 November 2025 have been fully considered but they are not persuasive.
Applicant argues:
Applicant respectfully disagrees that Taple describes "a type of litigation" as recited by claim 1, as the referenced sections of Taple merely describe information like a court case number, attorney docket number, filing date, identification of judges, magistrates, other court personnel, deposition participants, and the like that are specific to a particular proceeding. None of the referenced examples of data speak to a type of litigation associated with a proceeding.
Nonetheless, to draw this distinction with greater contrast, Applicant has amended claim 1 to require "receiving input identifying a case type that represents a type of litigation." The instant specification in paragraph [0197] describes classes of case types including "asbestos, mesothelioma, pharma, medical malpractice, med device, mass tort, generic personal injury." The instant specification at paragraph [0219] describes a case type as "patent infringement, securities, mass tort." Applicant respectfully submits that the Taple reference is silent on receiving input "identifying a case type that represents a type of litigation" associated with a proceeding or using a case type received as input to "identify, based on the case type, a first subset of relevant electronic documents from a larger database of documents related to the proceeding," as amended claim 1 requires. As such, the applied references fail to disclose or suggest all the features of claim 1. For at least these reasons, claim 1 is in condition for allowance.
The Examiner respectfully disagrees. Taple explicitly discloses receiving input identifying a case type that represents a type of litigation (identifying proceeding subject matter or identifying a proceeding involving deponent/witness Mr. Okerlund) associated with the proceeding (“In advance of, or contemporaneously to the start of a deposition, the ALPA system 200 requests or permits the identification of deposition participants. Deposition participants may include one or more deponents, or one or more deposing attorneys, one or more representing attorneys who represent the deponent in the deposition, or one or more other participants, such as witnesses or, in the course of courtroom proceedings, judges or magistrates or other court personnel. ALPA system 200 may also request or permit the input of other information associated with the deposition, such as a court case number, attorney docket number, filing date, other information that identifies the subject matter of the deposition proceeding”, [0025], see also “system 200 may be utilized to facilitate the deposition of a witness, Mr. Okerlund”, [0068]).
Further, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant’s remaining arguments are moot in view of the new grounds of rejection herein.
Claim Objections
Claims 9, 18, 23 and 25-26 are objected to because of the following informalities:
In lines 12-14 of claim 9, it is believed "in response to recognizing that the name uttered and appearing in the real-time transcript that could correspond with two or more of the identified names" should be 'in response to recognizing that the name uttered and appearing in the real-time transcript could correspond with two or more of the identified names'.
In line 3 of claim 18 “real--time” should be ‘real-time’.
In line 2 of claim 23 “the_AI module” should be ‘the AI module’.
In line 1 of claim 25 “The method of claim Y” should be ‘The method of claim 24’.
Claim 26 improperly ends with the term "and".
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6-8, 12-13, 15, 18 and 23-26 are rejected under 35 U.S.C. 103 as being unpatentable over Taple et al. (US PGPub 2018/0315429) in view of Krachman (US 6,738,760).
Claim 1:
Taple discloses a method comprising:
for a proceeding, receiving input identifying a case type that represents a type of litigation (identifying proceeding subject matter or identifying a proceeding involving deponent/witness Mr. Okerlund) associated with the proceeding (“In advance of, or contemporaneously to the start of a deposition, the ALPA system 200 requests or permits the identification of deposition participants. Deposition participants may include one or more deponents, or one or more deposing attorneys, one or more representing attorneys who represent the deponent in the deposition, or one or more other participants, such as witnesses or, in the course of courtroom proceedings, judges or magistrates or other court personnel. ALPA system 200 may also request or permit the input of other information associated with the deposition, such as a court case number, attorney docket number, filing date, other information that identifies the subject matter of the deposition proceeding”, [0025], see also “system 200 may be utilized to facilitate the deposition of a witness, Mr. Okerlund”, [0068]);
utilizing a module to identify, based on the case type, a first subset of relevant electronic documents from a larger database of documents related to the proceeding (“incorporate, or access via networked means, data obtained from discovery and in preferred embodiment, one or more indexed discovery databases associated with the case at issue in the deposition. Such databases, including indexed discovery databases, typically include documents and data regarding those documents (e.g., metadata) that are produced by parties during the course of a proceeding … Metadata associated with any file may be stored in order to identify later who wrote the document and when, a when it was edited and to whom it was sent (as examples)”, [0066], see also “system 200 may be utilized to facilitate the deposition of a witness, Mr. Okerlund. System 200 may then query the discovery database of documents as a whole to identify the use of infrequently used terms, or in preferred embodiments documents specifically associated with Mr. Okerlund (e.g. associated utilizing metadata identifying emails and documents authored by Mr. Okerlund)”, [0068]);
analyzing the first subset of documents to identify a first set of key words or phrases based, at least in part, on the case type (generating uncommon terms based on associated documents) (“query the discovery database of documents as a whole to identify the use of infrequently used terms, or in preferred embodiments documents specifically associated with Mr. Okerlund (e.g. associated utilizing metadata identifying emails and documents authored by Mr. Okerlund), and those documents may be analyzed by the system to identify language patterns particular to Mr. Okerlund, or the use of unusual or infrequently used words that have been used by Mr. Okerlund. STT module 234 may identify such words (in advance, during or after a deposition) as potential candidate terms for words spoken by Mr. Okerlund during his deposition that may be challenging to translate”, [0068], note that the infrequently used terms must be stored, at least temporarily, to be used in the STT);
receiving an output signal from one or more microphones, the output signal representing content from a proceeding having two or more participants (“receives (directly or indirectly) from microphone 105 digital or other data reflecting audio recordings of oral statements and other audible sounds made by deposer 103A and deponent 103B in the course of a deposition proceeding”, [0020], see also “As the participants (e.g., attorneys and deponent) speak, the system 200, utilizing the apparatus and methods above, will detect speech acts of each speaker, record and translate them, and convert them into text”, [0073]);
generating a real-time transcript based on the received output signal (“generate a transcript 113 reflecting the orally communicated content of the deposition proceeding”, [0020], see also “As the participants (e.g., attorneys and deponent) speak, the system 200, utilizing the apparatus and methods above, will detect speech acts of each speaker, record and translate them, and convert them into text”, [0073]);
comparing the real-time transcript to the first set of key words or phrases to identify search terms (uncommon terms (i.e. unusual or infrequently used words) are identified in the transcript) (“a user may click the mouse on uncommonunconmron [sic] terms in the electronic transcript (or terms identified by a user of the system 200), and the system will query or otherwiseaccess [sic] the indexed discovery database to identify documents where that same word or phrase occurred. Thus, a user of the system may access Mr. Okerlund's deposition transcript, clink [sic] on the term “Punxsutawney” a [sic] system 200 may identify specific documents in the disc [sic] database where this term occurred”, [0070]);
displaying the real-time transcript via a user interface (“audio translation engine 207 (e.g., speech to text module 234) may translate speech captured by microphone(s) 105 in real time into text identified by user. Such real-time translated text may be displayed to the respective users via user interfaces 109”, [0073], see also “while a deposition proceeding is taking place, output via user interface(s) 109, generated transcript portions for real-time review by participants”, [0046]);
conducting a search of the first subset of relevant electronic documents based on the identified search terms (“a user may click the mouse on uncommonunconmron [sic] terms in the electronic transcript (or terms identified by a user of the system 200), and the system will query or otherwiseaccess [sic] the indexed discovery database to identify documents where that same word or phrase occurred”, [0070]); and
displaying search results of the first subset of relevant electronic documents via the user interface (“Where system 200 has active access to such an indexed discovery database during the course of a deposition, system may dynamically search for documents in the discovery database by key word, and in such a way additional documents may be identified for use by an attorney utilizing system 200 during a deposition”, [0070], see also “one or more submitted exhibition documents available to the deposition participants, for example via a display of user interface(s) 109”, [0043]).
Taple does not explicitly disclose that the module is an AI module trained through the provision of documents, data and depositions from prior cases.
In an analogous art similarly using a module to identify a first subset of relevant electronic documents from a database (“providing electronic discovery on computer systems and archives is provided by using artificial intelligence to produce smart search agents to retrieve relevant data, particularly legally relevant documents”, Abstract), Krachman discloses the module as an AI module trained through the provision of documents, data and depositions from prior cases (“Information relevant to desired data related to an issue is input into a neural network to train said neural network to produce search algorithms in the form of smart search agent. The smart search agents are released onto target computer systems and/or archives to search for responsive data and documents”, Abstract, see also col. 3, lines 40-59).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of providing Taple’s module as an AI module trained through the provision of documents, data and depositions from prior cases in order to use “an AI search agent that "learns and understands" the content, context, and objective of the requester, and then applies this understanding to the electronic search of the target's electronic files. Going way beyond simple word searches or tags, this technology transcends traditional search methods, in effect allowing an "expert in a box" to search databases for concepts, with greater speed and accuracy” (Krachman, col. 5, lines 34-46).
Claim 2:
Taple in view of Krachman discloses the method of claim 1, further including receiving input via the user interface selecting one or more words within the real-time transcript, wherein the one or more words within the real-time transcript selected by the user are provided as selected search terms to conduct a search of the database (Taple, [0070]).
Claim 6:
Taple in view of Krachman discloses the method of claim 1, further including: initializing a speech-to-text (STT) module utilized to convert the output signal to the real-time transcript prior to a start of the proceeding, wherein initializing the STT module includes performing a search of the electronic documents stored in the database to identify infrequently used terms relevant to the proceeding, wherein the identified infrequently used terms are utilized to augment the STT module (Taple, [0068], see also [0069]).
Claim 7:
Taple in view of Krachman discloses the method of claim 6, wherein the search of the electronic documents stored in the database to identify infrequently used terms includes identifying terms that are not stored in a library associated with the STT module (Taple, [0068], see also [0069]).
Claim 8:
Taple in view of Krachman discloses the method of claim 6, wherein generating the real-time transcript includes providing links to one or more electronic documents stored in the database associated with identified infrequently used terms (Taple, [0070]).
Claim 12:
Taple discloses a system comprising: at least one microphone (Fig. 2, item 105); a user interface device accessible to at least one of a plurality of deposition participants (Fig. 2, item 109); and
an audio translation engine (Fig. 2, item 207), comprising:
an audio storage module (Fig. 2, item 230) configured to store at least one representation of audio recorded by the at least one microphone during a deposition proceeding (“audio storage module 230 receives an output signal from microphone(s) 105, and stores one or more audio recordings representing what was said at the deposition in memory”, [0037]);
a speech-to-text module (Fig. 2, item 234) configured to convert speech of the recorded audio into a textual representation of the speech (“a speech-to-text (STT) module 234. STT module 234 analyzes audio recordings stored by audio storage module 230 to convert the content of spoken word to written text”, [0041]); and
a transcript generator module (Fig. 2, item 240) configured to generate a document representing a transcript of the deposition based on the converted speech and to identify for each portion of the transcript which of a plurality of deposition participants was the speaker (“a transcript generation module 240. Transcript generation module 240 is operable to receive the output of STT module 234, as well as the output of speaker identification module 232 and exhibit module 236, to generate a transcript that accurately reflects the deposition proceeding including what was said during the deposition proceeding”, [0044]);
a search engine configured to interface with a database storing electronic documents relevant to the deposition proceeding, the search engine configured to generate search parameters based on the generated transcript (“a user may click the mouse on uncommonunconmron [sic] terms in the electronic transcript (or terms identified by a user of the system 200), and the system will query or otherwiseaccess [sic] the indexed discovery database to identify documents where that same word or phrase occurred”, [0070]) and to display results via the user interface (“Where system 200 has active access to such an indexed discovery database during the course of a deposition, system may dynamically search for documents in the discovery database by key word, and in such a way additional documents may be identified for use by an attorney utilizing system 200 during a deposition”, [0070], see also “one or more submitted exhibition documents available to the deposition participants, for example via a display of user interface(s) 109”, [0043]), wherein the search engine generates a list of key words based on a first subset of documents identified as relevant (generating uncommon terms based on associated documents),
wherein the first subset of documents are identified as relevant by utilizing a module based on identification of a case type that represents a type of litigation (“In advance of, or contemporaneously to the start of a deposition, the ALPA system 200 requests or permits the identification of deposition participants. Deposition participants may include one or more deponents, or one or more deposing attorneys, one or more representing attorneys who represent the deponent in the deposition, or one or more other participants, such as witnesses or, in the course of courtroom proceedings, judges or magistrates or other court personnel. ALPA system 200 may also request or permit the input of other information associated with the deposition, such as a court case number, attorney docket number, filing date, other information that identifies the subject matter of the deposition proceeding”, [0025], see also “system 200 may be utilized to facilitate the deposition of a witness, Mr. Okerlund”, [0068]),
wherein the search engine generates the search parameters based on a comparison of the list of key words to the transcript (uncommon terms (i.e. unusual or infrequently used words) are identified in the transcript) (“the transcript will be more accurate where Mr. Okerlund references the city of Punxsutawney (correctly identified by the system 200 as “Punxsutawney” in the converted transcript as opposed to “punks and tawny” due to the fact “Punxsutawney” as among those identified in the indexed discovery database as being an uncommonly used term occurring multiple times in associated documents (e.g., via metadata) with Mr. Okerlund). Moreover, utilizing user interface 109, a user may click the mouse on uncommonunconmron [sic] terms in the electronic transcript (or terms identified by a user of the system 200), and the system will query or otherwiseaccess [sic] the indexed discovery database to identify documents where that same word or phrase occurred. Thus, a user of the system may access Mr. Okerlund's deposition transcript, clink [sic] on the term “Punxsutawney” a [sic] system 200 may identify specific documents in the disc [sic] database where this term occurred, and in preferred embodiments may call out in particular those documents specifically associated with Mr. Okerlund (e.g., Mr. Okerlund's emails, identified via metadata) where that term occurred”, [0070]).
Taple does not explicitly disclose that the module is an AI module trained through the provision of documents, data and depositions from prior cases.
In an analogous art similarly using a module to identify a first subset of relevant electronic documents from a database (“providing electronic discovery on computer systems and archives is provided by using artificial intelligence to produce smart search agents to retrieve relevant data, particularly legally relevant documents”, Abstract), Krachman discloses the module as an AI module trained through the provision of documents, data and depositions from prior cases (“Information relevant to desired data related to an issue is input into a neural network to train said neural network to produce search algorithms in the form of smart search agent. The smart search agents are released onto target computer systems and/or archives to search for responsive data and documents”, Abstract, see also col. 3, lines 40-59).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of providing Taple’s module as an AI module trained through the provision of documents, data and depositions from prior cases in order to use “an AI search agent that "learns and understands" the content, context, and objective of the requester, and then applies this understanding to the electronic search of the target's electronic files. Going way beyond simple word searches or tags, this technology transcends traditional search methods, in effect allowing an "expert in a box" to search databases for concepts, with greater speed and accuracy” (Krachman, col. 5, lines 34-46).
Claim 13:
Taple in view of Krachman discloses the system of claim 12, wherein the user interface displays the transcript of the deposition and allows a user to highlight text from the transcript to be provided as an input to the search engine (Taple, [0070]).
Claim 15:
Taple in view of Krachman discloses the system of claim 12, wherein the speech-to-text module is initialized by performing an analysis of electronic documents stored in the database to identify infrequently used or scientific terms, wherein the speech-to-text module is augmented to include the identified infrequently used terms (Taple, [0068], see also [0069]).
Claim 18:
Taple in view of Krachman discloses the system of claim 12, wherein the speech-to-text module and the transcript generator module generate the document representing the transcript in real-time (Taple, [0045]).
Claim 23:
Taple in view of Krachman discloses the method of claim 1, further comprising: utilizing the AI module to predict which of the returned documents are most likely to prove useful to one or more of a questioning attorney and a specific witness (Taple, “dynamically search for documents in the discovery database by key word, and in such a way additional documents may be identified for use by an attorney utilizing system 200 during a deposition”, [0070]).
Claim 24:
Taple in view of Krachman discloses the method of claim 1, further comprising: utilizing the AI module to identify the first subset of relevant electronic documents based on one or more characteristics of a witness (Taple, [0066], see also [0068]).
Claim 25:
Taple in view of Krachman discloses the method of claim 24, but does not explicitly disclose wherein the one or more characteristics of a witness include a witness type selected from the group consisting of: a fact witness; an expert witness; a corporate witness; and a 30(b)(6) witness.
However, witnesses are generally either fact witnesses, who testify based on personal, firsthand knowledge of events (describing what they saw, heard, or did) but cannot offer opinions, or expert witnesses, who, conversely, are retained to analyze evidence and provide specialized opinions based on training, education, or experience to help the court understand complex issues.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention that Taple’s witness is a fact witness; an expert witness; a corporate witness; or a 30(b)(6) witness because most witnesses are either fact or expert witnesses.
Claim 26:
Taple in view of Krachman discloses the method of claim 1, wherein the case type includes one or more classes of litigation selected from the group consisting of: patent infringement; securities; mass tort; asbestos; mesothelioma; pharma; medical malpractice; medical device; mass tort (Taple [0067]-[0068]).
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Taple et al. (US PGPub 2018/0315429) in view of Krachman (US 6,738,760) and Bennett et al. (US PGPub 2007/0239689).
Claim 3:
Taple in view of Krachman discloses the method of claim 2, but does not explicitly disclose further including generating search parameters based on the selected search terms, wherein generating search parameters includes selecting a type of search to perform based on the selection of one or more words.
In an analogous art similarly generating search parameters for querying trial or deposition transcripts ([0003]), Bennett discloses wherein generating the search parameters includes selecting a type of search to perform based on the selection of one or more words (“Either a natural language or boolean searching front-end may be selected from the pull-down menu 65. Once either is selected, the terminal 15 automatically attempts to formulate a search”, [0051]).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of performing Taple’s search by generating search parameters, wherein generating the search parameters includes selecting a type of search to perform based on the selection of one or more words in order to provide the user different types of search options from which to choose (see Bennett, [0051]).
Claim 4:
Taple in view of Krachman and Bennett discloses the method of claim 3, wherein the type of search performed is selected from a group including one or more of Boolean, Proximity, Stemming, Fielded, Semantic, conceptual, or Fuzzy logic type searches (Bennett, [0051]).
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Taple et al. (US PGPub 2018/0315429) in view of Zhang et al. (US 8,560,310).
Claim 9:
Taple discloses a method comprising:
initializing a name recognition module prior to a start of a proceeding, wherein initializing the name recognition module includes performing a search of electronic documents stored in a database to identify names associated with the proceeding (“Examples include difficult words, terms, names, places, chemical names, or other problematic terms that may come up in association with a case. Where, for example, a document repository contains references to uniquely-named places (e.g., Punxsutawney, Pa.) or difficult biological, technical, scientific or chemical terms, (e.g., polysaccharides, immunoglobulin, dodecahedrane and the like) or any term (local idiom, for example) not commonly used in everyday speech, system 200 may proactively flag such terms from the indexed document production database”, [0069]),
receiving an output signal from one or more microphones, the output signal representing content from a proceeding having two or more participants (“receives (directly or indirectly) from microphone 105 digital or other data reflecting audio recordings of oral statements and other audible sounds made by deposer 103A and deponent 103B in the course of a deposition proceeding”, [0020], see also “As the participants (e.g., attorneys and deponent) speak, the system 200, utilizing the apparatus and methods above, will detect speech acts of each speaker, record and translate them, and convert them into text”, [0073]);
generating a real-time transcript based on the received output signal (“generate a transcript 113 reflecting the orally communicated content of the deposition proceeding”, [0020], see also “As the participants (e.g., attorneys and deponent) speak, the system 200, utilizing the apparatus and methods above, will detect speech acts of each speaker, record and translate them, and convert them into text”, [0073]),
recognizing a name uttered and appearing in the transcript (“additional processing may be required, especially where words are difficult to translate (proper names of people or places, foreign words, highly technical terminology that isn't readily translated). System 200 may present, via user interface 109, a list of terms to each speaker to clarify which term was intended”, [0078]); and
displaying the real-time transcript to the user (“audio translation engine 207 (e.g., speech to text module 234) may translate speech captured by microphone(s) 105 in real time into text identified by user. Such real-time translated text may be displayed to the respective users via user interfaces 109”, [0073], see also “while a deposition proceeding is taking place, output via user interface(s) 109, generated transcript portions for real-time review by participants”, [0046]).
Taple does not explicitly disclose in response to recognizing that the name uttered and appearing in the real-time transcript that could correspond with two or more of the identified names, prompting a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user via a user interface and a list of possible names selected from the identified names from the search of the electronic documents that could correspond with the name appearing in the real-time transcript.
In an analogous art similarly recognizing an uttered name, Zhang discloses in response to recognizing that the name uttered and appearing in a real-time transcript that could correspond with two or more of the identified names, prompting a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user via a user interface and a list of possible names selected from the identified names from a search of the electronic documents that could correspond with the name appearing in the real-time transcript (“In use, a user 12 may say "call Lao Wang". This is received by the speech recognition element 14 and converted to text. This text is provided to the smart name dialing manager 16. The smart name dialing manager forwards the name "Lao Wang" to the grammar in user model 18. The user model 18 returns three possible names to call (Wang Da Wen, Wang Li Tao, and Wang Pei). These three possible name matches are provided to user call prediction element 20 which determines the user must select one of the three possibilities. This information is forwarded to the response generation element 22 which will provide a communication to the user 12 to select one of the three possibilities to call. For a smart phone user, when the user speaks the command "call Lao Wang", the smart phone will come back with a screen showing the three possible Lao Wang choices. The user will then say the desired name (Wang Da Wen)”, col. 5, lines 20-35).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of, in response to recognizing that the name uttered and appearing in the real-time transcript could correspond with two or more of the identified names, prompting a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user via a user interface a list of possible names selected from the identified names from the search of the electronic documents that could correspond with the name appearing in the real-time transcript in order to give the user the ability to disambiguate recognition result candidates (see Zhang, col. 5, lines 20-35).
Claim 17:
Taple discloses a system comprising:
at least one microphone (Fig. 2, item 105); a user interface device accessible to at least one of a plurality of deposition participants (Fig. 2, item 109); and
an audio translation engine (Fig. 2, item 207), comprising:
an audio storage module (Fig. 2, item 230) configured to store at least one representation of audio recorded by the at least one microphone during a deposition proceeding (“audio storage module 230 receives an output signal from microphone(s) 105, and stores one or more audio recordings representing what was said at the deposition in memory”, [0037]);
a speech-to-text module (Fig. 2, item 234) configured to convert speech of the recorded audio into a textual representation of the speech (“a speech-to-text (STT) module 234. STT module 234 analyzes audio recordings stored by audio storage module 230 to convert the content of spoken word to written text”, [0041]); and
a transcript generator module (Fig. 2, item 240) configured to generate a document representing a transcript of the deposition based on the converted speech and to identify for each portion of the transcript which of a plurality of deposition participants was the speaker (“a transcript generation module 240. Transcript generation module 240 is operable to receive the output of STT module 234, as well as the output of speaker identification module 232 and exhibit module 236, to generate a transcript that accurately reflects the deposition proceeding including what was said during the deposition proceeding”, [0044]);
a search engine configured to interface with a database storing electronic documents relevant to the deposition proceeding, the search engine configured to generate search parameters based on the generated transcript (“a user may click the mouse on uncommonunconmron [sic] terms in the electronic transcript (or terms identified by a user of the system 200), and the system will query or otherwiseaccess [sic] the indexed discovery database to identify documents where that same word or phrase occurred”, [0070]) and to display results via the user interface (“Where system 200 has active access to such an indexed discovery database during the course of a deposition, system may dynamically search for documents in the discovery database by key word, and in such a way additional documents may be identified for use by an attorney utilizing system 200 during a deposition”, [0070], see also “one or more submitted exhibition documents available to the deposition participants, for example via a display of user interface(s) 109”, [0043]); and
a name recognition module, wherein the name recognition module is initialized by performing an analysis of electronic documents stored in the database to identify names relevant to the deposition proceeding, wherein the name recognition module is updated with the identified names, and
recognizing a name uttered and appearing in the transcript (“additional processing may be required, especially where words are difficult to translate (proper names of people or places, foreign words, highly technical terminology that isn't readily translated). System 200 may present, via user interface 109, a list of terms to each speaker to clarify which term was intended”, [0078]).
Taple does not explicitly disclose
wherein the name recognition module identifies references to a name uttered and appearing in the transcript that could correspond with two or more of the identified names, wherein the name recognition module: responsive to recognizing that the name uttered and appearing in the real-time transcript could correspond with two or more of the identified names, prompts a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user via a user interface a list of possible names selected from the identified names from the search of the electronic documents that could correspond with the name appearing in the real-time transcript.
In an analogous art similarly recognizing an uttered name, Zhang discloses identifying references to a name uttered and appearing in the transcript that could correspond with two or more of the identified names, and, in response to recognizing that a name uttered and appearing in a real-time transcript could correspond with two or more of the identified names, prompting a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user, via a user interface, a list of possible names selected from the identified names from a search of the electronic documents that could correspond with the name appearing in the real-time transcript (“In use, a user 12 may say "call Lao Wang". This is received by the speech recognition element 14 and converted to text. This text is provided to the smart name dialing manager 16. The smart name dialing manager forwards the name "Lao Wang" to the grammar in user model 18. The user model 18 returns three possible names to call (Wang Da Wen, Wang Li Tao, and Wang Pei). These three possible name matches are provided to user call prediction element 20 which determines the user must select one of the three possibilities. This information is forwarded to the response generation element 22 which will provide a communication to the user 12 to select one of the three possibilities to call. For a smart phone user, when the user speaks the command "call Lao Wang", the smart phone will come back with a screen showing the three possible Lao Wang choices. The user will then say the desired name (Wang Da Wen)”, col. 5, lines 20-35).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of, in response to recognizing that the name uttered and appearing in the real-time transcript could correspond with two or more of the identified names, prompting a user for clarification regarding which of the two or more identified names was referred to by the name, wherein prompting the user for clarification includes displaying to the user, via a user interface, a list of possible names selected from the identified names from the search of the electronic documents that could correspond with the name appearing in the real-time transcript, in order to give the user the ability to disambiguate recognition result candidates (see Zhang, col. 5, lines 20-35).
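For purposes of illustration only (this sketch is not part of the record and is not attributed to either reference), the dynamic keyword search Taple describes at [0070] (querying an indexed discovery database for a term selected from the electronic transcript) amounts to an inverted-index lookup. All names, document identifiers, and text below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch of the indexed-discovery-database lookup
# described in Taple at [0070]: build an inverted index over a
# document collection, then query it with a term selected from
# the transcript to identify documents containing that term.

def build_index(documents):
    """Map each lowercased word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, term):
    """Return, sorted, the ids of documents containing the queried term."""
    return sorted(index.get(term.lower(), set()))

docs = {
    "EX-101": "deposition of the witness regarding the accused device",
    "EX-102": "email discussing the accused device schematics",
    "EX-103": "invoice unrelated to the dispute",
}
index = build_index(docs)
# search(index, "device") identifies EX-101 and EX-102, analogous to
# clicking a transcript term to surface matching discovery documents.
```

The design point is that the index is built once over the collection, so each mid-deposition query is a dictionary lookup rather than a scan of every document.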
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL G NEWAY/Primary Examiner, Art Unit 2657