DETAILED ACTION
Introduction
1. This office action is in response to Applicant's submission filed on 04/29/2024. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and examined below.
Drawings
2. The drawings filed on 04/29/2024 have been accepted and considered by the Examiner.
Information Disclosure Statement
3. The Information Disclosure Statement (IDS) filed on 07/26/2024 has been accepted and considered. It is in compliance with the provisions of 37 CFR 1.97.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) The claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
4. Claims 1-2, 4-7, 15, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Montero (U.S. Patent Application Publication # 2023/0122555 A1).
With regards to claim 1, Montero teaches a system for improving telecommunications transcription security by providing transcriptions via integrated telecommunications network components, the system comprising at least one hardware processor and at least one non-transitory memory storing instructions, which, when executed by the at least one hardware processor, cause the system to detect a communication session via a telecommunications network between a first mobile device and second mobile device (Paragraphs 103-104, teach the hardware used including the memory and computer readable media. Para 4, teaches a method of transcription communication may include obtaining, at a first device, audio data during a communication session between the first device and a remote device. The audio data may be obtained from a public switched telephone network);
access a database storing service data of at least one of the first mobile device or the second mobile device using a mobile device identifier of the first mobile device or the second mobile device to determine whether either the first mobile device or the second mobile device is associated with an option to transcribe audio data of the communication session (Para 49, teaches a memory that includes a device identifier and a gateway identifier. Para 23, teaches a transcription system configured to generate transcriptions from audio. The audio may include audio from the remote device, from the first device, or both the remote device and the first device);
responsive to determining that at least one of the first mobile device or the second mobile device is associated with the option to transcribe the audio data of the communication session, provide the audio data of the communication session to a telecommunications node comprising an integrated-network component configured to transcribe the audio data of the communication session (Paragraphs 31-32, teach a third network controlled by a wireless telecommunications provider or some other network provider. A communication session between the remote device and the first device may be established such that audio originating at the remote device is directed to the first device over the first network. The first device may present the audio for a user of the first device. The first device may also direct the audio to the second device over the second network. The second device may direct the audio to the transcription system over the third network. The transcription system may generate a transcription of the audio and direct the transcription to the second device over the third network. The second device may direct the transcription to the first device over the second network. The first device may present the transcription of the audio to a user of the first device);
transcribe the audio data of the communication session at the telecommunications node using the integrated-network component (Para 65, teaches a first integrated circuit that is associated with the first network and may include credentials that may allow the device to access the first network. Additionally, the second integrated circuit may be associated with the second network and may include credentials that may allow the device to access the second network. Both of the first network and the second network may independently couple the device to the transcription system to allow communication therebetween);
and transmit a textual representation of the audio data of the communication session to each of (i) the first mobile device and (ii) the second mobile device using the transcribed audio data, wherein the textual representation is capable of being displayed on a graphical user interface (GUI) (Para 26, teaches that the transcription system generates a transcription of audio. The transcription system may also direct the transcription of the audio to the first device. The first device may be configured to present the transcription received from the transcription system. The first device may be configured to display the received transcriptions on a display that is part of the first device or a display of a device that is communicatively coupled to the first device. The transcription system may provide captions to multiple devices simultaneously).
With regards to claim 2, Montero teaches the system of claim 1, wherein the integrated-network component is part of the telecommunications network that is associated with at least one of the first mobile device or the second mobile device (Para 65, teaches a first integrated circuit that is associated with the first network and may include credentials that may allow the device to access the first network. Additionally, the second integrated circuit may be associated with the second network and may include credentials that may allow the device to access the second network. Both of the first network and the second network may independently couple the device to the transcription system to allow communication therebetween).
With regards to claim 4, Montero teaches the system of claim 1, wherein the textual representation of the audio data is generated via a Real-Time-Text (RTT) interface (Para 12, teaches that the transcription of audio of a communication session may be presented in real-time or substantially real-time on a display of a device of a hard-of-hearing person).
With regards to claim 5, Montero teaches a method for improving telecommunications transcription security by providing transcriptions via integrated telecommunications network components, the method comprising detecting a communication session via a telecommunications network between two or more user devices (Paragraphs 103-104, teach the hardware used including the memory and computer readable media. Para 4, teaches a method of transcription communication may include obtaining, at a first device, audio data during a communication session between the first device and a remote device. The audio data may be obtained from a public switched telephone network);
determining whether at least one of the two or more user devices is associated with an option to transcribe audio data of the communication session (Para 49, teaches a memory that includes a device identifier and a gateway identifier. Para 23, teaches a transcription system configured to generate transcriptions from audio. The audio may include audio from the remote device, from the first device, or both the remote device and the first device);
responsive to determining that at least one of the two or more user devices is associated with the option to transcribe the audio data of the communication session, providing the audio data of the communication session to a telecommunications node comprising an integrated-network component configured to transcribe the audio data of the communication session, wherein it is determined that at least one of the two or more user devices is associated with the option to transcribe the audio data of the communication session (Paragraphs 31-32, teach a third network controlled by a wireless telecommunications provider or some other network provider. A communication session between the remote device and the first device may be established such that audio originating at the remote device is directed to the first device over the first network. The first device may present the audio for a user of the first device. The first device may also direct the audio to the second device over the second network. The second device may direct the audio to the transcription system over the third network. The transcription system may generate a transcription of the audio and direct the transcription to the second device over the third network. The second device may direct the transcription to the first device over the second network. The first device may present the transcription of the audio to a user of the first device);
and generating, for display, on a graphical user interface (GUI), a visual representation of the transcribed audio data of the communication session at the at least one of the two or more user devices (Para 65, teaches a first integrated circuit that is associated with the first network and may include credentials that may allow the device to access the first network. Additionally, the second integrated circuit may be associated with the second network and may include credentials that may allow the device to access the second network. Both of the first network and the second network may independently couple the device to the transcription system to allow communication therebetween. Para 26, teaches that the transcription system generates a transcription of audio. The transcription system may also direct the transcription of the audio to the first device. The first device may be configured to present the transcription received from the transcription system. The first device may be configured to display the received transcriptions on a display that is part of the first device or a display of a device that is communicatively coupled to the first device. The transcription system may provide captions to multiple devices simultaneously).
With regards to claim 6, Montero teaches the method of claim 5, wherein the integrated-network component is part of a telecommunications network that is associated with at least one of the two or more user devices (Para 65, teaches a first integrated circuit that is associated with the first network and may include credentials that may allow the device to access the first network. Additionally, the second integrated circuit may be associated with the second network and may include credentials that may allow the device to access the second network. Both of the first network and the second network may independently couple the device to the transcription system to allow communication therebetween).
With regards to claim 7, Montero teaches the method of claim 5, wherein determining whether at least one of the two or more user devices is associated with an option to transcribe the audio data further comprises accessing a database storing service data of at least one of the two or more user devices using user device identifiers of the two or more user devices to determine whether at least one of the two or more user devices is associated with the option to transcribe audio data of the communication session (Para 49, teaches a memory that includes a device identifier and a gateway identifier. Para 23, teaches a transcription system configured to generate transcriptions from audio. The audio may include audio from the remote device, from the first device, or both the remote device and the first device).
With regards to claims 15 and 17, these are computer readable medium (CRM) claims for the corresponding method claims 5 and 7. These two sets of claims are related as method and CRM of using the same, with each claimed CRM element's function corresponding to the claimed method step. Accordingly, claims 15 and 17 are similarly rejected under the same rationale as applied above with respect to method claims 5 and 7.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Montero in view of Konicek (U.S. Patent Application Publication # 2007/0032225 A1).
With regards to claim 8, Montero teaches the method of claim 5, wherein determining whether at least one of the two or more user devices is associated with an option to transcribe the audio data further comprises parsing a session-initiation-protocol (SIP) message received by one or more of the two or more user devices to determine whether the SIP message comprises a user-selected option indicating to transcribe the audio data of the communication session (Para 13, teaches that some devices that present transcriptions during communication sessions use Internet protocol network connections through an Internet service provider to direct audio to and receive transcriptions from a transcription system for communication sessions conducted over an analog voice network, such as a public switched telephone network);
Montero may not explicitly detail responsive to determining that the SIP message comprises the user-selected option indicating to transcribe the audio data of the communication session, determining that at least one of the two or more user devices is associated with the option to transcribe the audio data of the communication session, wherein it is determined that the SIP message comprises the user-selected option indicating to transcribe the audio data of the communication session. This is taught by Konicek (Paragraphs 7-8, teach automatically forwarding information and/or connections destined for a mobile or wireless communication device to an alternative preferred communication device of the user's selection based upon the determined location of the mobile/wireless communication device. The method also comprises automatically forwarding information and connections destined for a portable or wireless communication device, to an alternate preferred communication device of the user's selection, based upon the proximity of the portable/wireless communication device to a preferred or alternate device or to a wireless or wired network on which the alternate preferred device is operating. Para 232, teaches that the above method can include the user instructing a particular device to automatically listen for certain keywords and record and/or transcribe a portion of the conversation before, during, or after the keyword);
Montero and Konicek can be considered as analogous art as they belong to a similar field of endeavor in audio transcription. It would thus have been obvious to one having ordinary skill in the art to advantageously combine the teachings of Konicek (Selecting a particular device for generating audio transcription) with those of Montero (Generating audio transcription alone) so as to provide the convenience of portability along with capability of supporting the best communication mode depending on the user's current location, the availability of alternative communication devices, and user-defined preferences (Konicek, para 5).
With regards to claim 18, this is a CRM claim for the corresponding method claim 8. These two claims are related as method and CRM of using the same, with each claimed CRM element's function corresponding to the claimed method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to method claim 8.
6. Claims 3, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Montero in view of Boekweg (U.S. Patent Application Publication # 2020/0357408 A1).
With regards to claim 3, Montero may not explicitly detail the limitation comprising the instructions to generate a summary of the transcribed audio data by providing the transcribed audio data to an artificial intelligence model configured to summarize transcriptions. However, Boekweg teaches this aspect (Paragraphs 36-37, teach machine learning models, such as neural networks, trained to generate summaries of transcription);
Boekweg also teaches generating the textual representation of the audio data, wherein the generated textual representation of the audio data is a representation of the generated summary of the transcribed audio data (Para 14 and figures 1-6, teach a method of presenting a summary of a transcription of a communication session on a device. The transcription presentation and the summary presentation may both be performed simultaneously during a communication session).
Montero and Boekweg can be considered as analogous art as they belong to a similar field of endeavor in audio transcription. It would thus have been obvious to one having ordinary skill in the art to advantageously combine the teachings of Boekweg (Generating audio summary along with transcription) with those of Montero (Generating audio transcription alone) so as to assist people that are hard-of-hearing or deaf to participate in the audio communications (Boekweg, para 2).
With regards to claim 14, Montero may not explicitly detail the limitation comprising generating a summary of the transcribed audio data by providing the transcribed audio data to an artificial intelligence model configured to summarize transcriptions. However, Boekweg teaches this aspect (Paragraphs 36-37, teach machine learning models, such as neural networks, trained to generate summaries of transcription);
Boekweg also teaches generating, for display, on the GUI, the visual representation of the transcribed audio data, wherein the visual representation of the transcribed audio data is a visual representation of the generated summary of the transcribed audio data (Para 14 and figures 1-6, as discussed above with respect to claim 3).
Montero and Boekweg can be considered as analogous art as they belong to a similar field of endeavor in audio transcription. It would thus have been obvious to one having ordinary skill in the art to advantageously combine the teachings of Boekweg (Generating audio summary along with transcription) with those of Montero (Generating audio transcription alone) so as to assist people that are hard-of-hearing or deaf to participate in the audio communications (Boekweg, para 2).
With regards to claim 20, this is a CRM claim for the corresponding method claim 14. These two claims are related as method and CRM of using the same, with each claimed CRM element's function corresponding to the claimed method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to method claim 14.
Allowable Subject Matter
7. Claims 9-13, 16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art of record, alone or in combination, does not teach or suggest the invention as recited in these claims. A more detailed statement of reasons for allowance will be provided if and when the application proceeds to allowance.
Conclusion
8. The following prior art, made of record but not relied upon, is considered pertinent to applicant's disclosure: Holm (U.S. Patent Application Publication # 2020/0075013 A1) and Lam (U.S. Patent # 9,736,298 B2). These references are also included in the PTO-892 form attached with this office action.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). In case you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEERAJ SHARMA, whose contact information is given below. The examiner can normally be reached Monday to Friday, 8 am to 5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Louis-Desir, can be reached at 571-272-7799 (Direct Phone). The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
/NEERAJ SHARMA/
Primary Examiner, Art Unit 2659
571-270-5487 (Direct Phone)
571-270-6487 (Direct Fax)
neeraj.sharma@uspto.gov (Direct Email)