Prosecution Insights
Last updated: April 19, 2026
Application No. 17/273,139

SPEECH DISCRIMINATION TEST SYSTEM AND DEVICE

Final Rejection §103
Filed: Mar 03, 2021
Examiner: HOFFPAUIR, ANDREW ELI
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Cochlear Limited
OA Round: 5 (Final)
Grant Probability: 39% (At Risk)
Projected OA Rounds: 6-7
Time to Grant: 3y 12m
Grant Probability with Interview: 80%

Examiner Intelligence

Career Allow Rate: 39% (29 granted / 75 resolved; -31.3% vs TC avg)
Interview Lift: +41.1% (allowance rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 12m
Currently Pending: 61
Total Applications: 136 (across all art units)
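The interview-lift figure above is just a difference of allowance rates between resolved cases with and without an examiner interview. A minimal sketch of that arithmetic (the records and field names below are invented placeholders, not the underlying dataset):

```python
# Hypothetical resolved-case records; `allowed` and `had_interview`
# are assumed field names, not the analytics provider's schema.
cases = [
    {"allowed": True,  "had_interview": True},
    {"allowed": False, "had_interview": False},
    {"allowed": True,  "had_interview": True},
    {"allowed": False, "had_interview": True},
    {"allowed": True,  "had_interview": False},
    {"allowed": False, "had_interview": False},
]

def allow_rate(cases, interview):
    """Allowance rate among resolved cases filtered by interview status."""
    pool = [c for c in cases if c["had_interview"] is interview]
    return sum(c["allowed"] for c in pool) / len(pool)

# Interview lift = allowance rate with interview minus rate without.
lift = allow_rate(cases, True) - allow_rate(cases, False)
print(f"{lift:+.1%}")  # +33.3% for this toy data
```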

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 27.4% (-12.6% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 75 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendment Entered

This Office action is responsive to the Amendment filed on June 25th, 2025. The examiner acknowledges the amendments to claims 1, 3, 11, 12, 15, 17, 19, 24, 26, 34, 35, 40, and 41, as well as the cancellation of claims 46-47. New claims 48-49 have been added. Claims 1, 3-4, 8, 10-15, 17, 19, 24, 26-27, 31, 33-38, 40-41, 43-44 and 48-49 are pending in the application.

Response to Arguments

Applicant’s arguments, filed June 25th, 2025, with respect to the rejections under 35 U.S.C. 112(b) have been fully considered. The rejections under 35 U.S.C. 112(b) are withdrawn. Applicant’s arguments, filed June 25th, 2025, with respect to the rejections under 35 U.S.C. 103, specifically the limitations “wherein a first consonant of a first speech sound in the given sequence and a second consonant of a second speech sound in the given sequence are selected based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant” and “wherein the signal-to-noise ratio is selected for the given sequence such that a correct answer rate associated with the subject discriminating the speech sounds presented in the given sequence meets a predetermined rate”, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant’s arguments, filed June 25th, 2025, with respect to the rejections under 35 U.S.C. 
103, specifically the limitations “determining noise such that a signal-to-noise ratio is adjusted from sequence to sequence to account for inherent difficulty in discriminating speech sounds presented in a given sequence” have been fully considered but are not persuasive. At page 13, Applicant argues that Lasry does not disclose “determining noise such that a signal-to-noise ratio is adjusted from sequence to sequence to account for inherent difficulty in discriminating speech sounds presented in a given sequence” as Lasry merely discloses adjusting a noise level from one test cycle to another where each test cycle may include a different sequence of words and the adjustment is not associated with the words presented in a particular cycle. Examiner respectfully disagrees. Lasry discloses in para. [0069-0070], figs. 8C-8H that one of the output variables for defining how the word cycle interacts with other cycles indicates “whether or not the same words are used if the word cycle is repeated (“same words if cycle repeated (y/n)?”)”. If this option is set to yes, then the representation in fig. 14 and para. [0086] would repeat the cycle with the same four words for each noise scenario and the adjustment would be associated with the words presented in a particular cycle. Therefore, Lasry does disclose determining noise (volume of background noise, para. [0009-0010]) such that a signal-to-noise ratio (fig. 14, para. [0009-0010], volume ratio output variable for defining a ratio, or range of ratios, between a volume of the vocalized word and a volume of the background noise) that is adjusted (“adjust output variables … noise”, para. [0090]) from sequence to sequence (“same words if cycle repeated (y/n)?”; “each cycle”, para. 
[0069-0070, 0086] Examiner note: if the cycle is set to be repeated using the same words, then each cycle would present the same words with a different noise level) accounts for inherent difficulty in discriminating speech sounds presented in a given sequence (para. [0090-0091], subject … response accuracy that is lower than expected for his age cohort, Audyx may proceed to repeat the same Audyx hearing test but with the word SoundCat being defined at a higher volume (for example 70 dB instead of 60 dB) and/or with the noise SoundCat being defined at a lower range of volumes (for example 15 to 55 dB instead of 20 dB to 60 dB); “difficulty in perceiving”). At page 15, with respect to claim 43, Applicant argues that Mauger does not disclose “estimate whether, or a likelihood, that the subject’s identification of the lone speech sound would be improved through cochlear implantation”, as Mauger predicts broad categories of “hearing benefit”, but fails to disclose the specific prediction of whether the subject’s ability to identify the lone sound would be improved. Examiner respectfully disagrees. Mauger discloses in para. [0045, 0059-0060, 0118-0120] that the developed data regarding the hearing loss could be a measure of the person’s hearing health … an estimate of the person’s hearing loss in percentage or in dB attenuation or dB hearing loss; predicting hearing health measures as a measure of the benefits that could be provided by a hearing aid and a measure of the benefit that could be provided by a cochlear implant or other device; and determining the suitability or utilitarian value with respect to utilizing/fitting cochlear implants. Therefore, Mauger does disclose a specific prediction as a utilitarian value.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 
102(a)(2) prior art against the later invention.

Claims 1, 3, 8, 11-14, 24, 26, 31, 34-37, and 48 are rejected under 35 U.S.C. 103 as being unpatentable over Lasry (US 20170273602 A1) in view of Lorenzi (Lorenzi et al., “Speech perception problems of the hearing impaired reflect inability to use temporal fine structure”), further in view of Finley (Finley et al., “Teaching heart auscultation to health professionals”, 31 December 2011 (2011-12-31), XP055927461, pages 68-69), and further in view of Wasowicz (US 20010046658 A1).

Regarding claim 1, Lasry discloses a method of testing speech comprehension of a subject (Abstract, para. [0002, 0029], hearing test system; defining and executing an audiometric test, Audyx system 10), the method comprising: storing an inventory of speech sounds (“plurality of test files 36 stored …”; “test sounds encoded in … 36”, para. [0033, 0035] (see also para. [0004])) on a speech discrimination testing system (Audyx system 10 comprising Audyx computer system 20, para. [0033]), the speech discrimination testing system (Audyx system 10, fig. 1A) comprising a data processor (“microprocessor”; “computer device 70/72”, para. [0038], fig. 1A), a memory in data communication with the data processor (database 42; “non-transitory computer readable medium”, para. [0033, 0038], fig. 1A), and at least one transducer for presentation of speech sounds to the subject (earphones 52; “transducer … producing test sounds … speakers”, fig. 1A, para. [0048]); selecting speech sounds from the inventory of speech sounds to be presented to the subject (“select SoundCats for inclusion in the Audyx hearing test”; “selection of a given SoundCat … presented as test sounds”, para. [0008, 0048-0049]) in a sequence of speech sounds (“sequence of the test sounds to be presented”, para. 
[0046]) to test an ability of the subject to hear speech sounds (audiometric tests may measure … ability to … recognize or distinguish speech”; “assess the hearing of the subject”, para. [0003, 0089]); presenting the sequence of speech sounds to the subject (“sequence”; “series of multiple test sound presentations”; “presentations of a test sound”, para. [0046, 0060], figs. 1A-1D & 14 (see also para. 0077-0078)); requesting the subject (“subject may be … asked to characterize … test sounds”, para. [0004]) identify which speech sound in the sequence of speech sounds was the speech sound (register the subject’s response to a test sound; “selects one of answer selection buttons … believes was the word presented”, para. [0044, 0077-0078, 0088], fig. 15); receiving (“registers … responses”, para. [0044, 0060]), at the speech discrimination testing system (Audyx system 10, fig. 1A), the subject's identification of which of the presented speech sounds was the speech sound (fig. 15, “register the subject’s response to a test sound”; “selected answer … registered subject responses”, para. [0077-0078, 0089]); and determining noise (volume of background noise, para. [0009-0010]) such that a signal-to-noise ratio (fig. 14, para. [0009-0010], volume ratio output variable for defining a ratio, or range of ratios, between a volume of the vocalized word and a volume of the background noise) that is adjusted (“adjust output variables … noise”, para. [0090]) from sequence to sequence (“same words if cycle repeated (y/n)?”; “each cycle”, para. [0069-0070, 0086] Examiner note: if the cycle is set to be repeated using the same words, then for the representation in fig. 14 each cycle would present the same words with a different noise level) accounts for inherent difficulty in discriminating speech sounds presented in a given sequence (para. 
[0090-0091], subject … response accuracy that is lower than expected for his age cohort, Audyx may proceed to repeat the same Audyx hearing test but with the word SoundCat being defined at a higher volume (for example 70 dB instead of 60 dB) and/or with the noise SoundCat being defined at a lower range of volumes (for example 15 to 55 dB instead of 20 dB to 60 dB); “difficulty in perceiving”). Lasry further discloses a selectable list of languages 222 in para. [0067]. Lasry does not expressly disclose selecting speech sounds from the inventory of speech sounds to be presented to the subject in a sequence of speech sounds to test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds stored in the inventory of speech sounds are used in a majority of most commonly spoken languages; wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format. However, Lorenzi discloses test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds stored in the inventory of speech sounds are used in a majority of most commonly spoken languages (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala”. These speech sounds are similar to the speech sounds disclosed in para. 
[0084, 0088-0089] of the instant application specification “[apa], [ata], [aka], [ama], [ana], [asa], [ala], [aja], [ipi], [iti], [iki], [imi], [ini], [isi], [ili], [iji], [opo], [oto], [oko], [omo], [ono], [oso], [olo], and [ojo] which provides language neutrality as they can be discriminated by speakers of the most commonly spoken languages”. Therefore, “apa, ata, aka, asa, aja, ama, ana, ala” would test an ability of a subject to hear speech sounds in a language-neutral manner and be used in a majority of the most commonly spoken languages); wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry to further test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds stored in the inventory of speech sounds are used in a majority of most commonly spoken languages; wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format, in view of the teachings of Lorenzi, as such a modification would have been merely a substitution of the test files/test sounds of Lasry for the 48 vowel-consonant-vowel items of Lorenzi to assess the subject’s ability to distinguish speech. 
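The /aCa/ construction discussed above is easy to illustrate. A minimal sketch (the vowel and consonant subsets below are the ones quoted from the specification and Lorenzi; the function is illustrative, not any party's implementation):

```python
# Build a language-neutral inventory of vowel-consonant-vowel (VCV)
# speech sounds following the aCa / iCi / oCo pattern quoted from the
# instant specification.
VOWELS = ["a", "i", "o"]
CONSONANTS = ["p", "t", "k", "m", "n", "s", "l", "j"]  # subset quoted in the Examiner note

def vcv_inventory(vowels, consonants):
    """Return VCV items such as 'apa', 'iti', 'olo' (same vowel on both sides)."""
    return [v + c + v for v in vowels for c in consonants]

inventory = vcv_inventory(VOWELS, CONSONANTS)
print(inventory[:8])  # ['apa', 'ata', 'aka', 'ama', 'ana', 'asa', 'ala', 'aja']
```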
Lasry, as modified by Lorenzi hereinabove, does not expressly disclose wherein each speech sound in the sequence of speech sounds is presented to the subject more than once, apart from a lone speech sound which is presented to the subject only once; requesting the subject identify which speech sound in the sequence of speech sounds was the lone speech sound. However, Finley discloses wherein each speech sound in the sequence of speech sounds is presented to the subject more than once, apart from a lone speech sound which is presented to the subject only once (“Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, “picking the odd sound out of a set (e.g., coat, coat, goat, coat)”); requesting the subject identify which speech sound in the sequence of speech sounds was the lone speech sound (“Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, other types of auditory discrimination exercises may involve picking the odd sound out of a set (e.g., coat, coat, goat, coat”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi hereinabove, such that each speech sound in the sequence of speech sounds is presented to the subject more than once, apart from a lone speech sound which is presented to the subject only once and requesting the subject identify which speech sound in the sequence of speech sounds was the lone speech sound, in view of the teachings of Finley, as such a modification would have yielded predictable results, namely assessing the subject’s ability to distinguish speech by performing the auditory discrimination exercise, that involves picking the odd sound out of a set. 
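The Finley-style odd-one-out exercise relied on above (e.g., coat, coat, goat, coat) can be sketched as follows; the sequence length, function names, and scoring are illustrative assumptions, not the claimed method:

```python
import random

def build_odd_one_out(common, lone, length=4, rng=random):
    """Build a sequence in which `common` repeats and `lone` appears
    exactly once at a random position, as in 'coat, coat, goat, coat'."""
    seq = [common] * length
    pos = rng.randrange(length)
    seq[pos] = lone
    return seq, pos

def score_response(true_pos, chosen_pos):
    """Did the subject correctly identify which item was the lone speech sound?"""
    return true_pos == chosen_pos

seq, pos = build_odd_one_out("coat", "goat")
print(seq)  # e.g. ['coat', 'goat', 'coat', 'coat']
```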
Lasry, as modified by Lorenzi and Finley hereinabove, does not expressly disclose wherein a first consonant of a first speech sound in the given sequence and a second consonant of a second speech sound in the given sequence are selected based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant. However, Wasowicz discloses wherein a first consonant (r or m or d, para. [0064]) of a first speech sound (ra, ma, or da, para. [0064]) in the given sequence and a second consonant (l or n or g, para. [0064]) of a second speech sound (la, na, ga, para. [0064]) in the given sequence (“consonant-vowel (CV) syllables … ra-la CV pairs, ma-na CV pairs and da-ga CV pairs … including other CV pairs”, para. [0060, 0064, 0079]) are selected (“sounds … selected to change the perceptual saliency of the sounds in order to change the difficulty of discriminating the sounds”, para. [0013]) based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant (“begin by presenting the user with the two stimuli with the greatest separation, such as stimulus 1 and 10 in the chart … sounds with smaller separation … ra-la CV pairs, ma-na CV pairs and da-ga CV pair … inherent distinguishing acoustic properties in various modules and training tasks”; “three correct responses in a row … difficulty of the task may be increased by changing … difficulty variables … consonant … two incorrect answers … decrease the level”, para. [0064, 0096-0098, 0108], fig. 25). 
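The Wasowicz-style selection mapped above — choosing consonant pairs by how hard they historically are to discriminate — can be sketched like this. The confusion scores are invented placeholders (only the CV pairs mirror those quoted from Wasowicz), so treat this as an illustration of the selection idea, not data from the reference:

```python
# Hypothetical historical difficulty scores: higher = more often confused.
# The ra-la, ma-na, da-ga pairs mirror those quoted from Wasowicz;
# the numeric scores are invented for illustration.
HISTORICAL_CONFUSION = {
    ("ra", "la"): 0.72,
    ("ma", "na"): 0.65,
    ("da", "ga"): 0.58,
    ("pa", "ta"): 0.31,
}

def pick_pair(target_difficulty):
    """Select the CV pair whose historical confusion score is closest
    to the requested difficulty level."""
    return min(HISTORICAL_CONFUSION,
               key=lambda p: abs(HISTORICAL_CONFUSION[p] - target_difficulty))

print(pick_pair(0.7))  # ('ra', 'la'): hardest pair, for a hard trial
print(pick_pair(0.3))  # ('pa', 'ta'): easiest pair, for an easy trial
```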
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi and Finley hereinabove, such that a first consonant of a first speech sound in the given sequence and a second consonant of a second speech sound in the given sequence are selected based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant, in view of the teachings of Wasowicz, as such a modification would have yielded predictable results, namely training and assessing one or more auditory processing, phonological awareness, phonological processing and reading skills of an individual by increasing or decreasing the level of difficulty using selected difficulty variables/type of sounds/consonants based on whether the user has met the advance level criteria or decrease level criteria.

Regarding claim 3, upon the modification of Lasry to incorporate the auditory discrimination exercise that involves picking the odd sound out of a set of Finley, as described with respect to claim 1 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1, wherein the speech discrimination testing system further comprises a display for presenting visual images to the subject (figs. 1A and 15, computer device 70/field 540), and the method further comprises: presenting one or more visual images to the subject in association with each speech sound in the sequence of speech sounds (as seen in fig. 15); and requesting the subject to identify which speech sound in the sequence of speech sounds was the lone speech sound, by identifying a visual image from the one or more images associated with the lone speech sound (para. 
[0088], a subject interacting with touchscreen 64 … selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise).

Regarding claim 8, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claim 1 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1, wherein the most commonly spoken languages include Mandarin, English, Spanish, Arabic, Hindi-Urdu, Bengali, Russian, Portuguese, Japanese, German, Javanese, Korean, French, Turkish, Vietnamese, Telugu, Cantonese, Italian, Polish, Ukrainian, Thai, Gujarati, Malay, Malayalam, Tamil, Marathi, Burmese, Romanian, Pashto, Dutch, Finnish, Greek, Indonesian, Norwegian, Hebrew, Croatian, Danish, Hungarian, Swedish, and Serbian (Examiner’s Note: upon the modification of Lasry to incorporate the speech sounds “apa, ata, aka, asa, aja, ama, ana, ala” of Lorenzi, which are similar to the speech sounds disclosed in para. [0089] of the instant application specification “[apa], [ata], [aka], [ama], [ana], [asa], [ala], [aja], [ipi], [iti], [iki], [imi], [ini], [isi], [ili], [iji], [opo], [oto], [oko], [omo], [ono], [oso], [olo], and [ojo]” which can be discriminated by speakers of the most commonly spoken languages, the speech sounds would read on the most commonly spoken languages). 
Regarding claim 11, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claim 1 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1, wherein the speech sounds presented in the sequence vary from one another by substitution of either one vowel, or one consonant (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala”, which substitute one consonant).

Regarding claim 12, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claim 1 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 11, in which speech sounds are presented within the sequence (Lasry, “presented with two similar test sounds”; “sequence of test sounds”, para. [0004, 0006]) as a consonant pair (Lorenzi, page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala”). 
The instant application specification explains that a pair of speech sounds presented may share the same vowel, but differ in the selection of a consonant, so as to provide a ‘consonant pair’ of speech sounds presented to the subject, and would for example include: [asa] and [ala], or [imi] and [isi] (para. [0089]). The speech sounds “apa, ata, aka, asa, aja, ama, ana, ala” follow the pattern disclosed in the instant application specification and would therefore form consonant pairs when the subject is presented with two similar test/speech sounds.

Regarding claim 13, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”) and the auditory discrimination exercise that involves picking the odd sound out of a set to assess the subject’s ability to distinguish speech of Finley, as described with respect to claim 1 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses wherein more than one sequence of speech sounds is presented (Lasry, figs. 14-15, para. [0086], “cycles … presentation of four compound test sounds … repeat”) such that the subject is required to identify lone speech sounds within each presented sequence (Finley, “Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, other types of auditory discrimination exercises may involve picking the odd sound out of a set (e.g., coat, coat, goat, coat)).

Regarding claim 14, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1, further comprising emitting the noise (noise 410, fig. 14) via the at least one transducer (“transducer for producing test sounds”, para. [0003, 0048]) while the speech sounds are presented (party noise 410 presented with word 420 as seen in fig. 14, para. 
[0086-0087]), to provide the signal-to-noise ratio (volume ratio output variable for defining a ratio, or range of ratios, between a volume … word and … noise, para. [0010]) such that the subject is required to discriminate between presented speech sounds while the noise is emitted (“compound test … distinguish”; “subject … selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise”, para. [0009, 0088]).

Regarding claim 24, Lasry discloses a speech discrimination testing system (fig. 1A, para. [0029], Audyx system 10) comprising: a data processor (“microprocessor”; “computer device 70/72”, para. [0038], fig. 1); a memory in data communication with the data processor (database 42; “non-transitory computer readable medium”, para. [0033, 0038], fig. 1A) configured to store an inventory of speech sounds for presentation to a subject (“plurality of test files 36 stored …”; “test sounds encoded in … 36”, para. [0033, 0035] (see also para. [0004])); and at least one transducer for presentation of speech sounds to the subject (earphones 52; “transducer … producing test sounds … speakers”, fig. 1, para. [0048]), wherein the speech discrimination testing system (Audyx system 10, fig. 1A) is configured to: select speech sounds from the inventory of speech sounds to be presented to the subject (“select SoundCats for inclusion in the Audyx hearing test”; “selection of a given SoundCat … presented as test sounds”, para. [0008, 0048-0049]) in a sequence (“sequence of the test sounds to be presented”, para. [0046]) to test an ability of the subject to hear speech sounds (“audiometric tests may measure … ability to … recognize or distinguish speech”; “assess the hearing of the subject”, para. [0003, 0089]); present selected speech sounds via the at least one transducer to the subject in a sequence (“sequence”; “series of multiple test sound presentations”; “presentations of a test sound”, para. [0046, 0060], figs. 
1A-1D & 14 (see also para. 0077-0078)); receive (“registers … responses”, para. [0044, 0060]) the subject's identification of the speech sound (fig. 15, “register the subject’s response to a test sound”; “selected answer … registered subject responses”, para. [0077-0078, 0089]); and determine noise (volume of background noise, para. [0009-0010]) such that a signal-to-noise ratio (fig. 14, para. [0009-0010], volume ratio output variable for defining a ratio, or range of ratios, between a volume of the vocalized word and a volume of the background noise) that is adjusted (“adjust output variables … noise”, para. [0090]) from sequence to sequence (“same words if cycle repeated (y/n)?”; “each cycle”, para. [0069-0070, 0086] Examiner note: if the cycle is set to be repeated using the same words, then for the representation in fig. 14 each cycle would present the same words with a different noise level) accounts for inherent difficulty in discriminating speech sounds presented in a given sequence (para. [0090-0091], subject … response accuracy that is lower than expected for his age cohort, Audyx may proceed to repeat the same Audyx hearing test but with the word SoundCat being defined at a higher volume (for example 70 dB instead of 60 dB) and/or with the noise SoundCat being defined at a lower range of volumes (for example 15 to 55 dB instead of 20 dB to 60 dB); “difficulty in perceiving”). Lasry further discloses a selectable list of languages 222 in para. [0067]. 
Lasry does not expressly disclose select speech sounds from the inventory of speech sounds to be presented to the subject in a sequence to test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds of the inventory of speech sounds are used in a majority of most commonly spoken languages; and wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format. However, Lorenzi discloses test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds stored in the inventory of speech sounds are used in a majority of most commonly spoken languages (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala”. These speech sounds are similar to the speech sounds disclosed in para. [0084, 0088-0089] of the instant application specification “[apa], [ata], [aka], [ama], [ana], [asa], [ala], [aja], [ipi], [iti], [iki], [imi], [ini], [isi], [ili], [iji], [opo], [oto], [oko], [omo], [ono], [oso], [olo], and [ojo] which provides language neutrality as they can be discriminated by speakers of the most commonly spoken languages”. 
Therefore, “apa, ata, aka, asa, aja, ama, ana, ala” would test an ability of a subject to hear speech sounds in a language-neutral manner and be used in a majority of the most commonly spoken languages); wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry to further test an ability of the subject to hear speech sounds in a language-neutral manner, wherein the speech sounds stored in the inventory of speech sounds are used in a majority of most commonly spoken languages; wherein the speech sounds presented in the given sequence follow a vowel-consonant-vowel format, a consonant-vowel-consonant format, a consonant-vowel format, or a vowel-consonant format, in view of the teachings of Lorenzi, as such a modification would have been merely a substitution of the test files/test sounds of Lasry for the 48 vowel-consonant-vowel items of Lorenzi to assess the subject’s ability to distinguish speech. Lasry, as modified by Lorenzi hereinabove, does not expressly disclose present selected speech sounds via the at least one transducer to the subject in a sequence such that each speech sound in the sequence is presented to the subject more than once, apart from one lone speech sound which is presented to the subject only once; receive the subject's identification of the lone speech sound. 
However, Finley discloses present selected speech sounds to the subject in a sequence such that each speech sound in the sequence is presented to the subject more than once, apart from one lone speech sound which is presented to the subject only once (“Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, “picking the odd sound out of a set (e.g., coat, coat, goat, coat)”); receive the subject's identification of the lone speech sound (“Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, “other types of auditory discrimination exercises may involve picking the odd sound out of a set (e.g., coat, coat, goat, coat)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi hereinabove, to present selected speech sounds via the at least one transducer to the subject in a sequence such that each speech sound in the sequence is presented to the subject more than once, apart from one lone speech sound which is presented to the subject only once; receive the subject's identification of the lone speech sound, in view of the teachings of Finley, as such a modification would have yielded predictable results, namely assessing the subject’s ability to distinguish speech by performing the auditory discrimination exercise that involves picking the odd sound out of a set. Lasry, as modified by Lorenzi and Finley hereinabove, does not expressly disclose wherein a first consonant of a first speech sound in the given sequence and a second consonant of a second speech sound in the given sequence are selected based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant. However, Wasowicz discloses wherein a first consonant (r or m or d, para. 
[0064]) of a first speech sound (ra, ma, or da, para. [0064]) in the given sequence and a second consonant (l or n or g, para. [0064]) of a second speech sound (la, na, ga, para. [0064]) in the given sequence (“consonant-vowel (CV) syllables … ra-la CV pairs, ma-na CV pairs and da-ga CV pairs … including other CV pairs”, para. [0060, 0064, 0079]) are selected (“sounds … selected to change the perceptual saliency of the sounds in order to change the difficulty of discriminating the sounds”, para. [0013]) based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant (“begin by presenting the user with the two stimuli with the greatest separation, such as stimulus 1 and 10 in the chart … sounds with smaller separation … ra-la CV pairs, ma-na CV pairs and da-ga CV pair … inherent distinguishing acoustic properties in various modules and training tasks”; “three correct responses in a row … difficulty of the task may be increased by changing … difficulty variables … consonant … two incorrect answers … decrease the level”, para. [0064, 0096-0098, 0108], fig. 25). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi and Finley hereinabove, such that a first consonant of a first speech sound in the given sequence and a second consonant of a second speech sound in the given sequence are selected based on a historical difficulty in discriminating a first test speech sound that includes the first consonant and a second test speech sound that includes the second consonant, in view of the teachings of Wasowicz, as such a modification would have yielded predictable results, namely training and assessing one or more auditory processing, phonological awareness, phonological processing and reading skills of an individual by increasing or decreasing the level of difficulty using selected difficulty variables/type of sounds/consonants based on whether the user has met the advance level criteria or decrease level criteria. Regarding claim 26, upon the modification of Lasry to incorporate the auditory discrimination exercise that involves picking the odd sound out of a set of Finley, as described with respect to claim 24 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 24, further comprising a display for presenting visual images to the subject (as seen in fig. 15, computer device 70/field 540, fig. 1A), wherein the speech discrimination testing system is further configured to present visual images to the subject in association with presented speech sounds (as seen in fig. 15); and enable the subject to identify which speech sound in the sequence of speech sounds was the lone speech sound, by identifying a visual image from the visual images associated with that speech sound (para. 
[0088], a subject interacting with touchscreen 64, … selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise, fig. 15). Regarding claim 31, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claim 24 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 24, wherein the most commonly spoken languages include Mandarin, English, Spanish, Arabic, Hindi-Urdu, Bengali, Russian, Portuguese, Japanese, German, Javanese, Korean, French, Turkish, Vietnamese, Telugu, Cantonese, Italian, Polish, Ukrainian, Thai, Gujarati, Malay, Malayalam, Tamil, Marathi, Burmese, Romanian, Pashto, Dutch, Finnish, Greek, Indonesian, Norwegian, Hebrew, Croatian, Danish, Hungarian, Swedish, and Serbian (Examiner’s Note: upon the modification of Lasry to incorporate the speech sounds “apa, ata, aka, asa, aja, ama, ana, ala” of Lorenzi which are similar to the speech sounds disclosed in para. [0089] of the instant application specification “[apa], [ata], [aka], [ama], [ana], [asa], [ala], [aja], [ipi], [iti], [iki], [imi], [ini], [isi], [ili], [iji], [opo], [oto], [oko], [omo], [ono], [oso], [olo], and [ojo]” which can be discriminated by speakers of the most commonly spoken languages; the speech sounds would include the most commonly spoken languages). Regarding claim 34, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claim 24 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 24, wherein the speech discrimination testing system (Audyx system 10, fig. 
1A) is configured to present speech sounds in the sequence so as to vary from one another by substitution of either one vowel, or one consonant (page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala” which substitute one consonant). Regarding claim 35, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”), as described with respect to claims 24 and 34 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 34, wherein the speech discrimination testing system (Audyx system 10, fig. 1A) is configured to present speech sounds within the sequence (Lasry, “presented with two similar test sounds”; “sequence of test sounds”, para. [0004, 0006]) as a consonant pair (Lorenzi, page 18868, right column, Procedure: “48 vowel-consonant-vowel items (i.e., three exemplars of 16 /aCa/ utterances with C = /p, t, k, b, d, g, f, s, ∫, v, z, j, m, n, r, l/, read by a French female speaker) … testing”) (Examiner note: if the consonants selected are p, t, k, s, j, m, n, l and following the vowel-consonant-vowel aCa the speech sounds form “apa, ata, aka, asa, aja, ama, ana, ala”). The instant application specification defines a pair of speech sounds presented may share the same vowel, but differ in the selection of a consonant, so as to provide a ‘consonant pair’ of speech sounds presented to the subject, and would for example include: [asa] and [ala], or [imi] and [isi] (para. [0089]). 
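As an aside, the /aCa/ construction and the specification's "consonant pair" definition quoted above can be illustrated with a short sketch; the function names are assumptions, and the consonant subset follows the examiner's note:

```python
from itertools import combinations

def build_vcv_inventory(vowels, consonants):
    """Vowel-consonant-vowel items, e.g. 'a' + 'p' + 'a' -> 'apa'."""
    return [v + c + v for v in vowels for c in consonants]

def consonant_pairs(vowel, consonants):
    """Pairs of VCV sounds that share the vowel but differ in consonant,
    e.g. ('asa', 'ala'), matching the spec's 'consonant pair' definition."""
    sounds = build_vcv_inventory([vowel], consonants)
    return list(combinations(sounds, 2))

sounds = build_vcv_inventory(["a"], ["p", "t", "k", "s", "j", "m", "n", "l"])
# -> ['apa', 'ata', 'aka', 'asa', 'aja', 'ama', 'ana', 'ala']
```

With the examiner's eight consonants this reproduces the "apa, ata, aka, asa, aja, ama, ana, ala" list, and pairs such as ("asa", "ala") fall out of the pairing step.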
The speech sounds “apa, ata, aka, asa, aja, ama, ana, ala” follow the pattern disclosed in the instant application specification and would therefore form consonant pairs when the subject is presented with two similar test/speech sounds. Regarding claim 36, upon the modification of Lasry to incorporate the vowel-consonant-vowel items of Lorenzi (“apa, ata, aka, asa, aja, ama, ana, ala”) and the auditory discrimination exercise that involves picking the odd sound out of a set to assess the subject’s ability to distinguish speech of Finley, as described with respect to claim 24 above, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses wherein the speech discrimination testing system (Audyx system 10, fig. 1A) is configured to present more than one sequence of speech sounds (Lasry, figs. 14-15, para. [0086], “cycles … presentation of four compound test sounds … repeat”) such that the subject is required to identify lone speech sounds within each presented sequence (Finley, “Principles of Auditory Training: Designing Auditory Training Steps of Progressive Difficulty” page 69, “other types of auditory discrimination exercises may involve picking the odd sound out of a set (e.g., coat, coat, goat, coat)”). Regarding claim 37, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 24, wherein the speech discrimination testing system (Audyx system 10, fig. 1A) is configured to emit the noise (noise 410, fig. 14) via the at least one transducer (“transducer for producing test sounds”, para. [0003, 0048]) while the speech sounds are presented (party noise 410 presented with word 420 as seen in fig. 14, para. [0086-0087]), to provide the signal-to-noise ratio (volume ratio output variable for defining a ratio, or range of ratios, between a volume … word and … noise, para. 
[0010]) such that the subject is required to discriminate between presented speech sounds while the noise is emitted (“compound test … distinguish”; “subject … selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise”, para. [0009, 0088]). Regarding claim 48, Lasry as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1. Lasry as modified by Lorenzi, Finley, and Wasowicz hereinabove does not expressly disclose wherein the signal-to-noise ratio is selected for the given sequence such that a correct answer rate associated with the subject discriminating the speech sounds presented in the given sequence meets a predetermined rate. However, Wasowicz discloses wherein the signal-to-noise ratio is selected for the given sequence such that a correct answer rate associated with the subject discriminating the speech sounds presented in the given sequence meets a predetermined rate (“the difficulty … increased … predetermined number (e.g., three) of sequential correct responses”; “alter … difficulty … level of background noise”, para. [0008, 0054, 0117]). Wasowicz further discloses that the system ensures that the current task is at a difficulty level that is sufficiently challenging to challenge the user's skills but not too difficult to discourage the user from continuing the training (para. [0008]). 
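As an aside, the adaptive rule described here (harder after a predetermined run of correct responses, easier after a run of errors) is an up-down staircase. The sketch below is a minimal illustration; the class name, step size, and starting SNR are assumptions, not anything disclosed by Wasowicz:

```python
class AdaptiveSnrStaircase:
    """Up-down staircase: after `harder_after` consecutive correct responses
    the SNR drops (task gets harder); after `easier_after` consecutive
    errors it rises (task gets easier)."""

    def __init__(self, snr_db=10.0, step_db=2.0, harder_after=3, easier_after=2):
        self.snr_db = snr_db
        self.step_db = step_db
        self.harder_after = harder_after
        self.easier_after = easier_after
        self._correct = 0
        self._wrong = 0

    def record(self, correct):
        """Record one trial outcome and return the SNR for the next trial."""
        if correct:
            self._correct += 1
            self._wrong = 0
            if self._correct >= self.harder_after:
                self.snr_db -= self.step_db   # harder: less favorable SNR
                self._correct = 0
        else:
            self._wrong += 1
            self._correct = 0
            if self._wrong >= self.easier_after:
                self.snr_db += self.step_db   # easier: more favorable SNR
                self._wrong = 0
        return self.snr_db
```

A rule of this shape holds the listener near a fixed correct-answer rate, which is one way to read the "predetermined rate" limitation at issue.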
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, such that the signal-to-noise ratio is selected for the given sequence such that a correct answer rate associated with the subject discriminating the speech sounds presented in the given sequence meets a predetermined rate, in view of the teachings of Wasowicz, for the obvious advantage of ensuring that the current task is at a difficulty level that is sufficiently challenging to challenge the user's skills but not too difficult to discourage the user from continuing the training. Claims 4 and 27 are rejected as being unpatentable over Lasry in view of Lorenzi, further in view of Finley, further in view of Wasowicz, as applied to claims 1 and 24 above, and further in view of Boretzki (US 9942673 B2). Regarding claim 4, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 3. Lasry further discloses a subject selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise (para. [0088]). Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, does not expressly disclose wherein presenting the one or more visual images to the subject further comprises presenting a sequence of visual images synchronized with the presentation of the speech sounds in the sequence of speech sounds such that each speech sound in the sequence of speech sounds is associated with a presented visual image. 
However, Boretzki discloses wherein presenting the one or more visual images to the subject further comprises presenting a sequence of visual images synchronized with the presentation of the speech sounds in the sequence of speech sounds such that each speech sound in the sequence of speech sounds is associated with a presented visual image (Abstract, claim 6, control unit is configured to: play said audio sequence to said user, display—using said display unit—said visualization synchronously with playing of said audio sequence, with a visualization of a scene to which said audio sequence belongs). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, such that presenting the one or more visual images to the subject further comprises presenting a sequence of visual images synchronized with the presentation of the speech sounds in the sequence of speech sounds such that each speech sound in the sequence of speech sounds is associated with a presented visual image, in view of the teachings of Boretzki, in order to display the answer selection buttons of Lasry synchronously with the playing of an audio sequence, with a visualization of a scene to which said audio sequence belongs (Boretzki, Abstract, claim 6) to enable a subject to select one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise (Lasry, para. [0088]). Regarding claim 27, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the speech discrimination testing system according to claim 26. Lasry further discloses a subject selects one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise (para. [0088]). 
Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, does not expressly disclose wherein the speech discrimination testing system is configured to present visual images to the subject in a sequence of visual images synchronized with the presentation of speech sounds such that each presented speech sound is associated with a presented visual image. However, Boretzki discloses wherein the speech discrimination testing system (hearing system 2, Abstract) is configured to present visual images to the subject in a sequence of visual images synchronized with the presentation of speech sounds such that each presented speech sound is associated with a presented visual image (Abstract, claim 6, control unit is configured to: play said audio sequence to said user, display—using said display unit—said visualization synchronously with playing of said audio sequence, with a visualization of a scene to which said audio sequence belongs). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, such that presenting the one or more visual images to the subject further comprises presenting a sequence of visual images synchronized with the presentation of the speech sounds in the sequence of speech sounds such that each speech sound in the sequence of speech sounds is associated with a presented visual image, in view of the teachings of Boretzki, in order to display the answer selection buttons of Lasry synchronously with the playing of an audio sequence, with a visualization of a scene to which said audio sequence belongs (Boretzki, Abstract, claim 6) to enable a subject to select one of the answer selection buttons that the subject believes was the word presented in the test sound together with noise (Lasry, para. [0088]). Claims 10 and 33 are rejected under 35 U.S.C. 
103 as being unpatentable over Lasry in view of Lorenzi, further in view of Finley, further in view of Wasowicz, as applied to claims 1 and 24 above, and further in view of Koo (US 20110046511 A1). Regarding claim 10, Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, discloses the method according to claim 1. Lasry, as modified by Lorenzi, Finley, and Wasowicz hereinabove, does not expressly disclose wherein vowels used in the speech sounds are selected from a group consisting of: [a], [i] and [o]; and consonants us

Prosecution Timeline

Mar 03, 2021
Application Filed
Dec 21, 2023
Non-Final Rejection — §103
Feb 15, 2024
Response Filed
Apr 08, 2024
Non-Final Rejection — §103
Jun 25, 2024
Response Filed
Aug 12, 2024
Final Rejection — §103
Nov 18, 2024
Request for Continued Examination
Nov 20, 2024
Response after Non-Final Action
Mar 26, 2025
Non-Final Rejection — §103
Jun 25, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593987
FOREHEAD TEMPERATURE MEASUREMENT SYSTEM WITH HIGH ACCURACY
2y 5m to grant Granted Apr 07, 2026
Patent 12564423
SYSTEMS AND METHODS FOR ACCESSING A RENAL CAPSULE FOR DIAGNOSTIC AND THERAPEUTIC PURPOSES
2y 5m to grant Granted Mar 03, 2026
Patent 12533043
DEVICE FOR PROCESSING AND VISUALIZING DATA OF AN ELECTRIC IMPEDANCE TOMOGRAPHY APPARATUS FOR DETERMINING AND VISUALIZING REGIONAL VENTILATION DELAYS IN THE LUNGS
2y 5m to grant Granted Jan 27, 2026
Patent 12521023
TEMPERATURE SELF-COMPENSATION INTERVENTIONAL OPTICAL FIBER PRESSURE GUIDEWIRE AND WIRELESS FFR MONITOR
2y 5m to grant Granted Jan 13, 2026
Patent 12502514
Vascular Access Device Adapter
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

6-7
Expected OA Rounds
39%
Grant Probability
80%
With Interview (+41.1%)
3y 12m
Median Time to Grant
High
PTA Risk
Based on 75 resolved cases by this examiner. Grant probability derived from career allow rate.
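The headline figures follow from the raw counts shown on the page, assuming the interview lift is additive in percentage points (an assumption of this sketch, not something the dashboard states):

```python
# Raw counts from the page: 29 grants out of 75 resolved cases.
granted, resolved = 29, 75
career_allow_rate = granted / resolved        # ~0.387, displayed as 39%

# "+41.1% interview lift", read here as +41.1 percentage points.
interview_lift = 0.411
with_interview = career_allow_rate + interview_lift

assert round(career_allow_rate * 100) == 39   # matches "Grant Probability 39%"
assert round(with_interview * 100) == 80      # matches "With Interview 80%"
```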
