DETAILED ACTION
This communication is responsive to the amendment filed 12/23/2025.
Notice of Pre-AIA or AIA Status
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 10-13 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by CARTER et al. (U.S. Pat. App. Pub. No. 2022/0076663).
Regarding claim 11, CARTER et al. disclose a system (210), comprising: at least one processor (Hearing Instrument, Fig. 3); and at least one memory ([0040-0045, 0146]) including program instructions which when executed by the at least one processor cause operations comprising: receiving a spatial audio signal (Microphone); processing the spatial audio signal to extract one or more sound features (Signal recognition and analysis, Fig. 3) from the spatial audio signal; interpreting the one or more sound features; generating haptic stimuli based on the interpretation of the one or more sound features (420); and causing the haptic stimuli to be sent to a user device (430) to be presented to a hearing-impaired user (Figs. 3-4, and [0040-0044]) as claimed.
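For purposes of illustration only, the sequence of operations recited in claim 11 can be viewed as a simple processing pipeline. The following minimal Python sketch is offered solely as an illustration of that pipeline; every function and variable name is hypothetical and is not drawn from the claims or from CARTER et al.:

    import numpy as np

    def extract_sound_features(spatial_audio: np.ndarray) -> dict:
        # Hypothetical feature extraction: overall intensity of the captured signal.
        return {"intensity": float(np.sqrt(np.mean(spatial_audio ** 2)))}

    def interpret_features(features: dict) -> str:
        # Hypothetical interpretation step: classify the sound as loud or quiet.
        return "loud" if features["intensity"] > 0.1 else "quiet"

    def generate_haptic_stimuli(interpretation: str) -> list:
        # Hypothetical mapping from the interpretation to haptic pulse amplitudes.
        return [1.0, 0.0, 1.0] if interpretation == "loud" else [0.3]

    def send_to_user_device(stimuli: list) -> None:
        # Stand-in for transmission to the user device, e.g., over a wireless link.
        print("haptic stimuli:", stimuli)

    # Claimed sequence: receive -> extract -> interpret -> generate -> send.
    spatial_audio = 0.2 * np.random.randn(16000)  # stand-in for a received spatial audio signal
    features = extract_sound_features(spatial_audio)
    send_to_user_device(generate_haptic_stimuli(interpret_features(features)))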
Regarding claim 12, CARTER et al. further disclose the system, wherein the one or more sound features comprise a direction, a distance, and an intensity of each sound source of one or more sound sources captured by the spatial audio signal (such features are inherently present in the sound captured by the microphone, Fig. 3).
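For illustration only, the direction, distance, and intensity features of claim 12 can be derived from a two-channel capture with conventional signal processing. The sketch below is one hypothetical way of doing so (no names, parameters, or conventions are taken from the reference): direction from the inter-channel delay, intensity from the RMS level, and distance from a crude level-based proxy.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def direction_distance_intensity(left, right, fs=16000, mic_spacing_m=0.15):
        # Direction: inter-channel delay at the cross-correlation peak,
        # converted to an angle of arrival via the microphone spacing.
        xcorr = np.correlate(left, right, mode="full")
        lag = np.argmax(xcorr) - (len(right) - 1)
        sin_theta = np.clip((lag / fs) * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
        # Intensity: RMS level of the two channels mixed down.
        intensity = float(np.sqrt(np.mean(((left + right) / 2.0) ** 2)))
        # Distance: a crude proxy; quieter sources are treated as farther away.
        distance = 1.0 / max(intensity, 1e-6)
        return {"direction_deg": float(np.degrees(np.arcsin(sin_theta))),
                "distance": distance,
                "intensity": intensity}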
Regarding claim 13, CARTER et al. further disclose the system, wherein the program instructions are further executable by the at least one processor (Fig. 3) to cause operations comprising: encoding (digitization) the haptic stimuli into one or more signals; and driving the one or more signals to a haptic interface of the user device (Signal output to user).
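The "encoding (digitization)" and "driving" steps of claim 13 may likewise be illustrated. In the hypothetical Python sketch below, haptic pulse amplitudes are digitized into a sampled drive waveform and handed to a stand-in actuator driver; none of these names or values come from the claims or the reference:

    import numpy as np

    def encode_haptic_stimuli(pulse_amplitudes, fs=1000, pulse_ms=100, carrier_hz=250.0):
        # Digitize a list of pulse amplitudes into one sampled drive waveform.
        t = np.arange(int(fs * pulse_ms / 1000)) / fs
        pulses = [a * np.sin(2 * np.pi * carrier_hz * t) for a in pulse_amplitudes]
        return np.concatenate(pulses)

    def drive_haptic_interface(waveform):
        # Stand-in for writing samples to an actuator driver on the user device.
        print(f"driving {waveform.size} samples to the haptic actuator")

    drive_haptic_interface(encode_haptic_stimuli([1.0, 0.0, 0.6]))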
Method claims 1-3 and 10 are similar to claims 11-13 except for being couched in method terminology; such methods are inherent when the corresponding structure or structural elements are shown in the reference.
Computer software product claim 20 is similar to claims 1-3 and 10-13 except for being couched in computer software terminology; such a product is inherent when the corresponding structure or method is shown in the reference.
Claims 1, 3-6, 11 and 13-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zass (U.S. Pat. App. Pub. No. 2018/0020285).
Regarding claim 11, Zass discloses a system (300), comprising: at least one processor (330); and at least one memory (320) including program instructions which when executed by the at least one processor cause operations comprising: receiving a spatial audio signal (360); processing the spatial audio signal to extract one or more sound features from the spatial audio signal (600a); interpreting the one or more sound features; generating textual data or haptic stimuli based on the interpretation of the one or more sound features (600b); and causing the textual data or the haptic stimuli to be sent to a user device to be presented to a hearing-impaired user ([0150-0155]) as claimed.
Regarding claim 13, Zass further discloses the system, wherein the program instructions are further executable by the at least one processor (330) to cause operations comprising: encoding (digitization) the haptic stimuli into one or more signals (600a); and driving the one or more signals to a haptic interface of the user device (600b).
Regarding claim 14, Zass further discloses the system, wherein the one or more signals include one or more vibrations that correspond to a first direction (703) and a first intensity of a first sound source (704), and wherein the one or more vibrations serve as haptic cues (705).
Regarding claim 15, Zass further discloses the system, wherein the program instructions (600a/b) are further executable by the at least one processor to cause operations comprising adjusting a duration and a frequency of the one or more vibrations based on the one or more sound features (Figs 6A/B).
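The direction- and intensity-dependent vibrations of claims 14 and 15 may be illustrated with a simple mapping. In the hypothetical sketch below, a source's direction selects the actuator and its intensity scales the duration and frequency of the vibration; all names and values are illustrative assumptions, not taken from Zass:

    def vibration_cue(direction_deg, intensity):
        # Map one sound source's direction and intensity to a vibration cue.
        # Louder sources get longer, higher-frequency vibrations; the sign of
        # the direction selects which actuator is driven. Illustrative only.
        loudness = max(0.0, min(intensity, 1.0))
        return {"actuator": "left" if direction_deg < 0 else "right",
                "duration_ms": 50 + 450 * loudness,    # adjust duration with intensity
                "frequency_hz": 100 + 200 * loudness}  # adjust frequency with intensity

    print(vibration_cue(direction_deg=-30.0, intensity=0.8))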
Regarding claim 16, Zass further discloses the system, wherein the program instructions are (600a/b) further executable by the at least one processor to cause operations comprising analyzing the spatial audio signal to calculate a first angle to a first sound source relative to an avatar or a user in a metaverse scene (700).
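The angle calculation of claim 16 amounts to a bearing computation in the scene's coordinate frame. The hypothetical sketch below computes the angle from an avatar's facing direction to a sound source; the names and the coordinate conventions are assumptions for illustration, not drawn from the reference:

    import math

    def angle_to_source(avatar_pos, avatar_heading_deg, source_pos):
        # Angle (degrees) from the avatar's facing direction to a sound source,
        # normalized to [-180, 180).
        dx = source_pos[0] - avatar_pos[0]
        dy = source_pos[1] - avatar_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))  # world-frame bearing to the source
        return (bearing - avatar_heading_deg + 180.0) % 360.0 - 180.0

    # Source at world bearing ~53.1 deg while the avatar faces 90 deg -> ~-36.9 deg.
    print(angle_to_source((0.0, 0.0), 90.0, (3.0, 4.0)))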
Method claims 1 and 3-6 are similar to claims 11 and 13-16 except for being couched in method terminology; such methods are inherent when the corresponding structure or structural elements are shown in the reference.
Response to Amendment
Applicant’s arguments dated 12/23/2025 have been fully considered, but they are not deemed to be persuasive.
In response to Applicant’s arguments, the examiner respectfully disagrees, for the following reasons:
Regarding the sole independent system claim 11, the cited reference (U.S. Pat. App. Pub. No. 2022/0076663 to CARTER et al.) clearly shows
a system (210) comprising: at least one processor (Hearing Instrument, Fig. 3); and at least one memory ([0040-0045, 0146]) including program instructions:
([0027] FIG. 2A depicts an exemplary system 210 according to an exemplary embodiment, including hearing prosthesis 100, which, in an exemplary embodiment, corresponds to cochlear implant 100 detailed above, and a portable body carried device (e.g. a portable handheld device as seen in FIG. 2A (a smart phone), a watch, a pocket device, any body carried device, etc.) 240 in the form of a mobile computer having a display 242. The system includes a wireless link 230 between the portable handheld device 240 and the hearing prosthesis 100 (the link can be wired in some embodiments). In an exemplary embodiment, the hearing prosthesis 100 is an implant implanted in recipient 99 (as represented functionally by the dashed lines of box 100 in FIG. 2A). Again, it is noted that while the embodiments detailed herein will be described in terms of utilization of a cochlear implant, the teachings herein can be applicable to other types of prostheses.
[0040] In this exemplary embodiment, the variable delay device is included in the hearing prostheses, and is configured to impart variable delay on to the output of the standard signal processing path with respect to the flow of the signal through the hearing instrument. In an exemplary embodiment, the variable delay device can be a memory unit that stores the received input from the standard signal processing path, and permits such to be retrieved shortly thereafter, in accordance with the time frames that will be detailed below. The variable delay can be part of the sound processor and/or signal processor that is utilized in the prosthesis, or any system that can enable a delay to be utilized in accordance with at least some exemplary embodiments. A delay circuit can be utilized. In this exemplary embodiment, a user can control the amount of delay, such as via input into the prosthesis whether such is an input that corresponds to a time frame or otherwise is an input that is indicative of an ultimate desire of the recipient, where the prosthesis determines what the delay should be based on that input. As seen, the hearing prosthesis is configured to augment the signal based on input from the signal recognition and analysis block. This will be described in greater detail below, but, in an exemplary embodiment, can be a chip or a processor or a computing device that includes therein software for speech recognition and/or sound recognition, etc. Additional details of this will be described below. In any event, in an exemplary embodiment, the signal recognition and analysis block can be utilized to determine the amount of delay, and can provide a control signal to the variable delay block to adjust the delay and/or remove the delay, again in accordance with the teachings below. Signal augmentation can correspond to any of the actions herein with respect to how the signal that is based upon the captured sound is modified or otherwise how the signal is replaced with another signal, again as will be described in greater detail below. The digital to analog conversion is an optional example, and it is noted that some embodiments can be utilized herein with respect to a purely analog system. Indeed, the digital storage unit is also optional, as well as the microphone and the analog-to-digital converter associated therewith (not shown, but complied with respect to the indicia “digitization”). The digital storage unit can instead be an analog storage unit, and may not be present in any eventuality as well in some embodiments. In an exemplary embodiment, the storage unit can be a memory unit or a circuit that includes transistors etc. or a set of chips, etc.
[0041-0045 …])
which when executed by the at least one processor cause operations comprising:
receiving a spatial audio signal (Microphone, Fig. 3);
processing the spatial audio signal to extract one or more sound features (Signal recognition and analysis, Fig. 3) from the spatial audio signal; interpreting the one or more sound features;
([0042] In an exemplary embodiment, the signal recognition and analysis block of FIG. 3 can be a word identification and/or word prediction device. In an exemplary embodiment, the signal recognition and analysis block of FIG. 3 can correspond to a processor or a computer chip or to a computing device that is configured to identify and/or predict words and/or can be a component, such as an input and/or output device that is in signal communication or otherwise can be placed into signal communication with a remote device that has the noted functionality associated with word recognition and/or word prediction.
[0044] Briefly, FIG. 4 presents an exemplary flowchart for an exemplary method, method 400, which includes method action 410, which includes receiving a signal which includes speech data. In an exemplary embodiment, the signal is received from the microphone of FIG. 3. In an exemplary embodiment, the signal is the signal that is received from a microphone, although the signal can be a different signal which is based on a signal from a microphone (e.g., such as might be the case with respect to preprocessing and/or the output of a sound processor alike, depending on how the teachings are implemented herein, or with respect to a remote processing embodiment, where, for example, the hearing prostheses communicates with a device that is located remotely, and the signal from the microphone is utilized to develop another signal which is what is ultimately analyzed or otherwise evaluated, although that said, same signal can be transmitted to the remote component). Moreover, the signal can be received from another device, such as a USB port, etc., where, for example, the speech data does not result from live speech, but instead, could be speech that is prerecorded, and/or in a scenario where, for example, the speech originates at a remote location and is transmitted to the recipient electronically, such as would be the case with respect to a television broadcast or a radio broadcast, etc. where, for example, the prostheses is in wire communication and/or in signal communication with an output device that transmits or otherwise provides the speech data (e.g., thus bypassing the microphone, for example). As long as the signal includes speech data, it is covered by method action 410.
[0045] Method 400 also includes method action 420, which includes processing the received signal to identify and/or predict one or more words in the speech data. This can be done by any processor that is configured to do such, such as a processor and/or a computer and/or a computer chip and/or artificial intelligence devices and/or a trained expert system, etc. In an exemplary embodiment, the action 420 is executed utilizing a computing device that includes word identification/word recognition software (e.g., such as that used on a smart phone when one speaks into the smart phone and the smart phone converts the captured voice sound to text, or the Dragon™ software, etc., or any variation thereof) that is utilized in voice to text applications and/or in spelling correction applications, etc. Note further that the method action disclosed herein can also include utilizing systems that “learn” from the past and/or from user experiences, again, such as the Dragon™ software system, etc. Moreover, as noted above, systems can also include word prediction techniques. In an exemplary embodiment, the device system and/or method that is utilized to execute method action 420 can be a computing device that includes software for word prediction, such as that which is found with web browsers and/or that which is found in smart phones, etc. Any device, system, and/or method that can enable word identification and/or word recognition and/or word prediction can be utilized in at least some exemplary embodiments.)
generating haptic stimuli based on the interpretation of the one or more sound features (420); and causing the haptic stimuli to be sent to a user device (430) to be presented to a hearing-impaired user (Figs. 3-4, and [0040-0044])
([0045], reproduced above, and further:
[0046] Method 400 further includes method action 430, which includes evoking a hearing percept based in the received signal, wherein the evoked hearing percept includes one or more modified words based on the identification and/or prediction of the one or more words.
[0047] It is briefly noted that method 400 can be executed, in some embodiments, completely within a self-contained hearing prosthesis, such as a cochlear implant, or any of the other hearing prostheses detailed herein. It is also noted that some embodiments include methods where the speech data and the features associated with voice are replaced with features associated with light, and the percept that is evoked as a sight percept that includes one or more modified visions or images based on the identification and/or prediction providing that such is enabled by the art.
[0048] Accordingly, in an exemplary embodiment, the processing of method action 420 includes utilizing speech recognition software to identify the one or more words.)
Furthermore, the claimed “haptic stimuli” are tactile sensations, i.e., vibrations, motions, forces, or temperatures. Since the hearing instrument can be an implanted hearing device, the haptic stimuli are tactile sensations, namely bone-conduction vibrations, to be sent to a user device (430) to be presented to a hearing-impaired user as claimed.
Regarding the sole method claim 1, it is similar to system claim 11 except for being couched in method terminology; such a method is inherent when the corresponding structure or structural elements are shown in the reference.
Regarding computer software product claim 20, it is similar to claims 1 and 11 except for being couched in computer software terminology; such a product is inherent when the corresponding structure or method is shown in the reference.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUHAN NI whose telephone number is (571)272-7505. The examiner can normally be reached on Monday to Friday from 10:00 am to 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a supplied web-based collaboration tool. To schedule an interview, applicants are encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUHAN NI/Primary Examiner, Art Unit 2691