Prosecution Insights
Last updated: April 17, 2026
Application No. 17/676,167

UTILIZATION OF VOCAL ACOUSTIC BIOMARKERS FOR ASSISTIVE LISTENING DEVICE UTILIZATION

Final Rejection: §101, §102, §103
Filed: Feb 20, 2022
Examiner: DOUGHERTY, SEAN PATRICK
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 3y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 75% (above average; 701 granted / 932 resolved; +5.2% vs TC avg)
Interview Lift: +14.3% (moderate lift, among resolved cases with interview)
Avg Prosecution: 3y 9m (typical timeline; 63 currently pending)
Total Applications: 995 across all art units (career history)

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§102: 31.6% (-8.4% vs TC avg)
§103: 32.8% (-7.2% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 932 resolved cases
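The dashboard figures above are simple ratios over the examiner's resolved cases. As a minimal sketch (the function name is mine, and the 70.0% Tech Center baseline is only implied by the +5.2% delta shown, not stated directly):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100.0 * granted / resolved, 1)

rate = allowance_rate(701, 932)   # 75.2, matching the dashboard's 75%
tc_avg = rate - 5.2               # 70.0, the implied Tech Center average
print(f"Career allow rate: {rate}% ({rate - tc_avg:+.1f}% vs TC avg)")
```

The statute-specific percentages would follow the same pattern, each computed over the subset of resolved cases that received that rejection type.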

Office Action

Grounds: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-37 and 39-49 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Each of Claims 21-45 has been analyzed to determine whether it is directed to any judicial exceptions.

Step 2A, Prong 1

Each of Claims 21-45 recites at least one step or instruction for capturing audio and identifying biomarkers indicative of a hearing ability, diagnosing a hearing-related impairment by comparing to the speech of others, and identifying a recommended action, which is grouped as a mental process under the 2019 PEG or a certain method of organizing human activity under the 2019 PEG. Accordingly, each of Claims 21-45 recites an abstract idea.

Specifically, Claims 21-45 recite the underlined elements, as shown below, which are all abstract ideas (observation, judgment, or evaluation, grouped as a mental process under the 2019 PEG) (and additional elements, bolded). All underlined limitations can be performed mentally, as an individual could observe sounds being heard by another and perform the analysis of the sounds either mentally or using pencil and paper.

Claim 21. A device, comprising: an apparatus configured to capture an audio environment of a person, wherein the device is configured to identify, based on the captured audio environment, one or more biomarkers present in the audio environment indicative of the recipient's ability to hear.

Claim 22.
The device of claim 21, wherein: the audio environment includes speech of the person; the one or more biomarkers are linguistic characteristics of the speech of the person, the device is a body worn hearing prosthesis, and the device is configured to identify the linguistic characteristics as such.

Claim 23. The device of claim 21, wherein: the device is a hearing prosthesis configured to evaluate the one or more biomarkers and develop data indicative of the person's ability to hear, wherein the person is a recipient of the hearing prosthesis.

Claim 24. The device of claim 21, wherein: the one or more biomarkers are linguistic characteristics of people other than the person and the device is configured to identify the linguistic characteristics as such.

Claim 25. The device of claim 21, wherein: the device is a body worn hearing prosthesis and is configured to automatically adjust a feature thereof based on the identification.

Claim 26. The device of claim 21, wherein: the device is a multimodal hearing prosthesis; and the device is configured to automatically adjust a crossover point of modes of the device based on the identification.

Claim 27. The device of claim 21, wherein: the one or more biomarkers include one or more first biomarkers of people other than the person and the device is configured to identify the first biomarkers as such; the one or more biomarkers also include second biomarkers of the person, and the device is configured to identify the second one or more biomarkers as such; the device is configured to compare the one or more first biomarkers to the one or more second biomarkers and develop data indicative of the person's ability to hear; and the device is a hearing prosthesis.

Claim 28. The device of claim 21, wherein: the device is also configured to evoke a hearing percept based at least in part on the captured audio environment.

Claim 29.
A device, comprising: a component configured to receive data based on speech of a person, wherein the device is configured to evaluate the received data and identify a recommended action to be taken by the person based on the evaluation.

Claim 30. The device of claim 29, wherein: the device is configured to evaluate the speech of the person by way of the device being configured to evaluate the received data by acoustically analyzing patterns in speech productions of the person.

Claim 31. The device of claim 29, wherein: the recommended action is an adjustment to a habilitation and/or rehabilitation regime of the person.

Claim 32. The device of claim 29, wherein: the device is configured to receive second data based on speech of other people other than the person; and the device is configured to compare the data and the second data; and the device is configured to identify the recommended action to be taken based on the evaluation of the received data and the comparison of the data and the second data.

Claim 33. The device of claim 29, wherein: evaluating the speech of the person by way of the device being configured to evaluate the received data includes comparing the data based on speech of the person to data based on prior speech of the person produced before the speech of the person of the data based on speech of the person.

Claim 34. The device of claim 29, wherein: the device is configured to, based on the received data based on speech of the person, compare the speech of the person to data of a speech development trajectory for similarly situated people.

Claim 35. The device of claim 29, wherein: the device is also configured to evaluate non-speech related data, wherein the functionality of identifying the action is also based on the evaluation of the non-speech related data.

Claim 36.
The device of claim 29, wherein: the device being configured to evaluate the received data imparts a functionality of evaluating the data based on speech of the person to the device which includes detecting at least one of speech articulation problems, phonological process problems or problems pronouncing sounds having relatively higher frequency components than that of other pronounced sounds.

Claim 37. The device of claim 29, wherein: the functionality of evaluating the data based on speech of the person includes determining whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions.

Claim 39. A device, comprising: a processor, wherein the device is configured to obtain data based on captured first sounds corresponding to speech of a person; and the device is configured to compare the data based on the captured first sounds corresponding to speech of the person to data based on speech of others and diagnose a hearing-related impairment of the person based on the comparison.

Claim 40. The device of claim 39, wherein: diagnosing the hearing-related issue by way of the device being configured to diagnose the hearing-related impairment includes determining an ability of the person to hear based on the comparison.

Claim 41. The device of claim 39, wherein: the device is further configured to diagnose the hearing-related issue by determining that the person should increase exposure time to more-complex second sounds relative to that which was previously the case.

Claim 42. The device of claim 39, wherein: the device is a hearing prosthesis.

Claim 43. The device of claim 39, wherein: the device includes a sound capture component configured to capture sound, wherein the sound capture component is a component configured to obtain the data based on captured first sounds; and the processor is a sound processor configured to process sound captured by the sound capture component.
Claim 44. The device of claim 43, wherein: the data based on speech of others is based on second sounds captured by the sound capture component, wherein the second sounds correspond to one or more voices of others.

Claim 45. The device of claim 39, wherein: the data based on speech of others is statistical data of a statistically relevant population.

Claim 46. The device of claim 39, wherein: the device is further configured to evaluate the data based on captured first sounds to determine whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions.

Claim 47. The device of claim 29, wherein: the device is configured so that execution of the evaluation of the received data is based at least in part on one or more linguistic characteristics in the speech of the person.

Claim 48. The device of claim 43, wherein: the device is configured to wirelessly and/or wiredly communicate with a personal computer.

Claim 49. The device of claim 41, wherein: the device is configured to wirelessly and/or wiredly communicate with a body worn device configured to capture the first sounds and enable adjustments to be made to the body worn device based on the diagnosis.

Accordingly, as indicated above, each of the above-identified claims recites an abstract idea.

Step 2A, Prong 2

The above-identified abstract idea in each of Claims 21-37 and 39-49 is not integrated into a practical application under the 2019 PEG because the additional elements (in bold, above), either alone or in combination, generally link the use of the above-identified abstract idea to a particular technological environment or field of use. More specifically, the additional elements of hearing prosthesis and processor are generically recited computer elements in Claims 21-37 and 39-49 which do not improve the functioning of a computer, or any other technology or technical field.
Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine, effect a transformation, or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. For at least these reasons, the abstract idea identified above in Claims 21-37 and 39-49 is not integrated into a practical application under the 2019 PEG.

Moreover, the above-identified abstract idea is not integrated into a practical application under the 2019 PEG because the claimed method and system merely implement the above-identified abstract idea (e.g., mental process and certain method of organizing human activity) using rules (e.g., computer instructions) executed by a computer (e.g., hearing prosthesis and processor as claimed). In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.

Additionally, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art, or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. That is, like Affinity Labs of Tex. v. DirecTV, LLC, the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
Thus, for these additional reasons, the abstract idea identified above in Claims 21-45 is not integrated into a practical application under the 2019 PEG. Accordingly, Claims 21-37 and 39-49 are each directed to an abstract idea under the 2019 PEG.

Step 2B

None of Claims 21-37 and 39-49 includes additional elements that are sufficient to amount to significantly more than the abstract idea, for at least the following reasons. These claims require the additional elements of a hearing prosthesis and processor. The above-identified additional elements are generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks. The courts have recognized such computer functions as well-understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

Per Applicant’s specification, the hearing prosthesis is a cochlear implant or middle ear implant or fully implanted bone conduction device (paragraphs [0032], [0047]) and the processor is a generic processor as described in paragraph [0045]. Accordingly, in light of Applicant’s specification, the claimed hearing prosthesis and processor are reasonably construed as generic computing devices. Like SAP Am., Inc. v. InvestPic, LLC (Fed. Cir. 2018), it is clear, from the claims themselves and the specification, that these limitations require no improved computer resources, just already available computers, with their already available basic functions, to use as tools in executing the claimed process. Furthermore, Applicant’s specification does not describe any special programming or algorithms required for the hearing prosthesis and processor.
This lack of disclosure is acceptable under 35 U.S.C. §112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant's specification essentially admits that this hardware is conventional and performs well-understood, routine, and conventional activities in the computer industry or arts. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see Berkheimer memo of April 19, 2018, (III)(A)(1) on page 3). Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (TLI Communications).

The recitation of the above-identified additional limitations in Claims 21-37 and 39-49 amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); and TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit).
Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.

A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement.

Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art, or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution. For at least the above reasons, Claims 21-37 and 39-49 are directed to applying an abstract idea as identified above on a general purpose computer without (i) improving the performance of the computer itself, or (ii) providing a technical solution to a problem in a technical field.
None of Claims 21-45 provides meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself. Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements in Claims 21-37 and 39-49 do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment. That is, neither the general computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity.

When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine, and conventional activity specified at a high level of generality in a particular technological environment. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Thus, Claims 21-37 and 39-49 merely apply an abstract idea to a computer and do not (i) improve the performance of the computer itself (as in Bascom and Enfish), or (ii) provide a technical solution to a problem in a technical field (as in DDR). Therefore, none of Claims 21-37 and 39-49 amounts to significantly more than the abstract idea itself.

Accordingly, Claims 21-37 and 39-49 are not patent eligible and are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21, 39, 40 and 43-46 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Automated Vocal Analysis of Children with Hearing Loss and Their Typical and Atypical Peers to VanDam et al. (hereinafter, VanDam).

Regarding Claim 21, VanDam discloses a device, comprising inter alia: an apparatus (Page 5, Hardware and Software, “The system consists of a wearable recording device”) configured to capture an audio environment of a person (Page 5, Hardware and Software, “The device is designed to be turned on in the morning and record continuous audio for up to 16 hours or until the device is turned off in the evening.”), wherein the device is configured to identify, based on the captured audio environment, one or more biomarkers present in the audio environment indicative of the recipient's ability to hear (Page 1, Abstract, Results, “Samples from children who were hard-of-hearing patterned more similarly to those of typically-developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis.”) (Page 5, Hardware and Software, “The present work is based on acoustic analyses (using the LENA ASP software) of speech-like segments produced by the target child. The LENA ASP system generates many acoustic variables, and from these variables also generates an estimate of the child’s vocal development age (see also Oller et al. 2010)”).
Regarding Claim 39, VanDam discloses a device comprising inter alia: a processor (Page 5, Hardware and Software, “… a computer and processed with the LENA software. The LENA software is capable of outputting the full audio (16 bit, 16 kHz sampling rate, PCM WAV format) and summary reports of the ASP software.”), wherein the device is configured to obtain data based on captured first sounds corresponding to speech of a person (Page 5, Hardware and Software, “Day-long recordings were collected and processed using hardware and software … device is designed to be turned on in the morning and record continuous audio … acoustic recordings are transferred to a computer and processed with the LENA software.”) and the device is configured to compare the data based on the captured first sounds corresponding to speech of the person to data based on speech of others (Page 2, Introduction “This study uses automated analyses to compare HH children to children who are typically developing (TD), autistic (Autism Spectrum Disorders or ASD), or language delayed (LD).”) (Page 5, Data Analysis and Statistical Models “This study uses automated analyses to compare HH children to children who are typically developing (TD), autistic (Autism Spectrum Disorders or ASD), or language delayed (LD).”) (Page 5, Hardware and Software “Acoustic segments are labeled for non-speech events including noise, silence, and the presence of electronic media (e.g., TV, radio) and for human vocal activity including that of any adult female, adult male, the target child wearing the recorder, or other children in the environment. 
A final label designates “overlap” between any voice and any other category of sound.”) and diagnose a hearing-related impairment of the person based on the comparison (Page 1, Abstract, Results, “The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis.”) (also see Page 6, Results, Group Classification section).

Regarding Claim 40, VanDam discloses the device of claim 39, wherein: the functionality of diagnosing the hearing-related issue includes determining an ability of the person to hear based on the comparison (Page 2, Introduction, “This study is the first to use automated acoustic analyses to examine vocal development in children who are hard of hearing (HH; those with mild- to severe hearing loss) using a very large database of hundreds of whole-day recordings of families in naturalistic home environments. This study uses automated analyses to compare HH children to children who are typically developing (TD), autistic (Autism Spectrum Disorders or ASD), or language delayed (LD).”).

Regarding Claim 43, VanDam discloses the device of claim 39, wherein: the device includes a sound capture component configured to capture sound (Page 5, Hardware and Software, “The system consists of a wearable recording device”), wherein the sound capture component is a component configured to obtain the data based on captured first sounds (Page 5, Hardware and Software, “The device is designed to be turned on in the morning and record continuous audio for up to 16 hours or until the device is turned off in the evening.”); and the processor is a sound processor configured to process sound captured by the sound capture component (Page 5, Hardware and Software, “Acoustic recordings are transferred to a computer and processed with the LENA software. The LENA software is capable of outputting the full audio (16 bit, 16 kHz sampling rate, PCM WAV format) and summary reports of the ASP software.”).
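The segment-labeling scheme that VanDam attributes to the LENA software can be illustrated with a small sketch. This is not the LENA API; the label names are paraphrased from the quoted description, and the code is only a minimal model of "one label per acoustic segment, with overlap as its own category":

```python
from collections import Counter

# Labels paraphrased from the quoted description: speaker categories plus
# non-speech events, with "overlap" for co-occurring voice and other sound.
LABELS = {"adult_female", "adult_male", "target_child", "other_child",
          "noise", "silence", "electronic_media", "overlap"}

def tally(segment_labels):
    """Count labeled acoustic segments per category."""
    unknown = set(segment_labels) - LABELS
    if unknown:
        raise ValueError(f"unknown labels: {unknown}")
    return Counter(segment_labels)

counts = tally(["target_child", "adult_female", "overlap", "target_child"])
print(counts.most_common(1))  # [('target_child', 2)]
```

Tallies like these are the kind of per-speaker counts that the cited analyses compare across a day-long recording.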
Regarding Claim 44, VanDam discloses the device of claim 43, wherein: the data based on speech of others is based on second sounds captured by the sound capture component, wherein the second sounds correspond to one or more voices of others (Page 5, Hardware and Software, “Acoustic segments are labeled for non-speech events including noise, silence, and the presence of electronic media (e.g., TV, radio) and for human vocal activity including that of any adult female, adult male, the target child wearing the recorder, or other children in the environment. A final label designates “overlap” between any voice and any other category of sound.”).

Regarding Claim 45, VanDam discloses the device of claim 39, wherein: the data based on speech of others is statistical data of a statistically relevant population (Page 15, Table 2, recordings from 273 individuals with 1913 day-long recordings) (Page 2, Introduction, “This study is the first to use automated acoustic analyses to examine vocal development in children who are hard of hearing (HH; those with mild- to severe hearing loss) using a very large database of hundreds of whole-day recordings of families in naturalistic home environments.”).

Regarding Claim 46, VanDam discloses the device of claim 39, wherein: the device is further configured to evaluate the data based on captured first sounds to determine whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions (Discussion, “Age estimates modeled on child vocal characteristics were unique for each of the four groups, with the HH group patterning most similarly to the TD group, followed by the LD and ASD groups, respectively.”).

Claims 21, 27, 39, 41, 42, 46 and 49 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20140336448 A1 to Banna et al. (hereinafter, Banna).
Regarding Claim 21, Banna discloses a device, comprising inter alia: an apparatus (paragraph [0022]) configured to capture an audio environment of a person (paragraph [0021], “… the example hearing prosthesis 12 generally includes one or more microphones (microphone inputs) 14 for receiving audio input representing an audio environment of the prosthesis recipient…”), wherein the device is configured to identify, based on the captured audio environment (paragraph [0033], “In practice, as the processing unit 16 receives audio input representing the audio environment of the recipient, the processing unit module may evaluate the audio input in real-time so as to determine one or more linguistic characteristics in the audio input.”), one or more biomarkers present in the audio environment indicative of the recipient's ability to hear (paragraph [0044], “Moreover, to facilitate carrying out this analysis in real-time, the processing unit may limit its analysis to identify key parameters as proxies for more complex linguistic characteristics or may generally estimate various ones of the linguistic characteristics rather than striving to determine them exactly. For instance, rather than working to determine an exact count of words spoken by the recipient or spoke by others in the recipient's environment, the processing unit may determine an approximate count. Such an approximation may be clinically relevant, as it may facilitate general comparisons between extents of speech to which the recipient is exposed.
For example, if the processing unit determines that the recipient is exposed to approximately 400 words one day and approximately 600 words the next day, that 50% estimated increase may be key to evaluating the recipient's speech exposure.”) (paragraph [0034], “Examples of linguistic characteristics include, among others, (1) a measure of proportion of time spent by the recipient speaking, (2) a measure of proportion of time spent by the recipient receiving speech from others … (5) a measure of quantity of words spoken by one or more people other than the recipient, (6) a measure of quantity of sentences spoken by one or more people other than the recipient, (7) a measure of quantity of conversational turns by the recipient, (8) a measure of length of utterances by the recipient or by others … (11) a measure of words spoken by adult versus words spoken by children, (12) a measure of quantity of conversations engaged in or initiated by the recipient, and (13) indications of whether the speech is shouted or conversational.”) and diagnose a hearing-related impairment of the person based on the comparison (paragraph [0064], “In line with the discussion above, the data output by the hearing prosthesis in this method may be used to develop speech training for the recipient. For instance, given data indicating that the recipient is being exposed to a certain extent of speech (including speech production, speech reception, conversations, and so forth), a clinician may arrange for the recipient to be exposed to more speech, or to speech of different types, in an effort to help rehabilitate the recipient. Further, for recipients who are initially developing their speech skills (such as infants or the like), a clinician could use this output data to help identify speech errors, such as phoneme substitution for instance, and to develop appropriate therapy and further speech training.”).
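The day-over-day exposure comparison in Banna's quoted paragraph [0044] reduces to a simple percentage change over approximate word counts. A minimal sketch, not Banna's implementation (the function name and signature are mine; the numbers are the example from the quote):

```python
def exposure_change_pct(prev_words: int, curr_words: int) -> float:
    """Percent change in estimated daily word exposure."""
    return 100.0 * (curr_words - prev_words) / prev_words

# The quote's example: ~400 words one day, ~600 the next.
print(exposure_change_pct(400, 600))  # 50.0, the "50% estimated increase"
```

The point of the passage is that such coarse estimates remain clinically useful even when exact word counts are not computed.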
Regarding Claim 27, Banna discloses the device of claim 21, wherein: the one or more biomarkers include one or more first biomarkers of people other than the person and the device is configured to identify the first biomarkers as such (one of from paragraph [0034], “Examples of linguistic characteristics include … (2) a measure of proportion of time spent by the recipient receiving speech from others… (5) a measure of quantity of words spoken by one or more people other than the recipient, (6) a measure of quantity of sentences spoken by one or more people other than the recipient, (7) a measure of quantity of conversational turns by the recipient, (8) a measure of length of utterances … by others, … (11) a measure of words spoken by adult versus words spoken by children …”); the one or more biomarkers also include second biomarkers of the person, and the device is configured to identify the second one or more biomarkers as such (another of from paragraph [0034], “Examples of linguistic characteristics include … (2) a measure of proportion of time spent by the recipient receiving speech from others… (5) a measure of quantity of words spoken by one or more people other than the recipient, (6) a measure of quantity of sentences spoken by one or more people other than the recipient, (7) a measure of quantity of conversational turns by the recipient, (8) a measure of length of utterances … by others, … (11) a measure of words spoken by adult versus words spoken by children …”); the device is configured to compare the one or more first biomarkers to the one or more second biomarkers and develop data indicative of the person's ability to hear (paragraph [0030], “And the user interface system 24 may process that data and provide a graphical user interface that depicts a comparison of the logged linguistic characteristics (possibly per stimulation mode) over time.”); and the device is a hearing prosthesis (paragraph [0022]).
Regarding Claim 39, Banna discloses a device, comprising inter alia: a processor (paragraph [0032] “As shown in FIG. 1, the processing unit 16 of the example hearing prosthesis 12 includes a data logging and linguistic analysis (DLLA) module 32 for carrying out some or all of these added functions.”), wherein the device is configured to obtain data based on captured first sounds (paragraph [0033] “In practice, as the processing unit 16 receives audio input representing the audio environment of the recipient, the processing unit module may evaluate the audio input in real-time so as to determine one or more linguistic characteristics in the audio input.”) corresponding to speech of a person (words and sentences spoken by recipient, paragraph [0034]); and the device is configured to compare the data based on the captured first sounds corresponding to speech of the person to data based on speech of others (words and sentences spoken by others, paragraph [0034]) (paragraph [0034] “Examples of linguistic characteristics include, among others, (1) a measure of proportion of time spent by the recipient speaking, (2) a measure of proportion of time spent by the recipient receiving speech from others … (5) a measure of quantity of words spoken by one or more people other than the recipient, (6) a measure of quantity of sentences spoken by one or more people other than the recipient, (7) a measure of quantity of conversational turns by the recipient, (8) a measure of length of utterances by the recipient or by others … (11) a measure of words spoken by adult versus words spoken by children, (12) a measure of quantity of conversations engaged in or initiated by the recipient, and (13) indications of whether the speech is shouted or conversational.”) and diagnose a hearing-related impairment of the person based on the comparison (paragraph [0064] “In line with the discussion above, the data output by the hearing prosthesis in this method may be used to develop speech training 
for the recipient. For instance, given data indicating that the recipient is being exposed to a certain extent of speech (including speech production, speech reception, conversations, and so forth), a clinician may arrange for the recipient to be exposed to more speech, or to speech of different types, in an effort to help rehabilitate the recipient. Further, for recipients who are initially developing their speech skills (such as infants or the like), a clinician could use this output data to help identify speech errors, such as phoneme substitution for instance, and to develop appropriate therapy and further speech training.”). Regarding Claim 41, Banna discloses the device of claim 39, wherein the functionality of diagnosing the hearing-related issue includes determining that the person should increase exposure time to more-complex second sounds relative to that which was previously the case (paragraph [0044] “Moreover, to facilitate carrying out this analysis in real-time, the processing unit may limit its analysis to identify key parameters as proxies for more complex linguistic characteristics or may generally estimate various ones of the linguistic characteristics rather than striving to determine them exactly. For instance, rather than working to determine an exact count of words spoken by the recipient or spoke by others in the recipient's environment, the processing unit may determine an approximate count. Such an approximation may be clinically relevant, as it may facilitate general comparisons between extents of speech to which the recipient is exposed. For example, if the processing unit determines that the recipient is exposed to approximately 400 words one day and approximately 600 words the next day, that 50% estimated increase may be key to evaluating the recipient's speech exposure.”). Regarding Claim 42, Banna discloses the device of claim 39, wherein the device is a hearing prosthesis (paragraph [0022]). 
Regarding Claim 46, Banna discloses the device of claim 39, wherein: the device is further configured to evaluate the data based on captured first sounds to determine whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions (paragraph [0062] “…the one or more linguistic characteristics may include a quantity of speech, such as a measure of quantity of speech by the recipient and a measure of quantity of speech by one or more people other than the recipient.”). Regarding Claim 49, Banna discloses the device of claim 41, wherein: the device is configured to wirelessly and/or wiredly communicate with a body worn device configured to capture the first sounds and enable adjustments to be made to the body worn device based on the diagnosis (paragraph [0026] “…and the prosthesis may include a communication interface arranged to communicate with those components through a wireless and/or wired link of any type now known or later developed.”). Claim(s) 21-26, 29-33, 35 and 46-49 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by EP 1835784 A1 to Zhang et al. (hereinafter, Zhang). Regarding Claim 21, Zhang discloses a device, comprising inter alia: an apparatus (paragraph [0016] “… hearing assistance device 100 is a hearing aid.”) configured to capture (paragraph [0016] “… mic 1102 is an omnidirectional microphone connected to amplifier 104 which provides signals to analog-to-digital converter 106 ("A/D converter"). 
The sampled signals are sent to processor 120 which processes the digital samples and provides them to the digital-to-analog converter 140 ("D/A converter").”) an audio environment of a person (paragraph [0016] “… sampled signals are sent to processor 120 which processes the digital samples and provides them to the digital-to-analog converter 140…”) (paragraph [0018] “Processor 120 includes modules for execution that will detect environments…”), wherein the device is configured to identify, based on the captured audio environment, one or more biomarkers present in the audio environment (paragraph [0024] “the actual usage log can track the number of times the device detected wind noise, machinery noise, one's own speech sound, and other speech sound.”) (paragraph [0025] “The resulting actual and hypothetical usage logs can also be used to determine statistics on the modes based on actual … settings … For example, the gain reduction data for wind noise, machinery noise, one's owns speech sound, and other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class.”) indicative of the recipient's ability to hear (paragraph [0026] “… statistics of actual hearing inputs where appropriate to assist an audiologist or dispenser in diagnosing problems …”) (paragraph [0026] “… it is possible to capture and store input sound level histogram.”) (paragraph [0028] “The output of the actual usage log and hypothetical usage log (or plurality of hypothetical usage logs in embodiments employing more than one hypothetical usage log) may be depicted in a graphical format to a user and may be displayed by the programmer to review behavior of the hearing assistance device”). 
Regarding Claim 22, Zhang discloses the device of claim 21, wherein: the audio environment includes speech of the person (paragraph [0024] “the actual usage log can track the number of times the device detected wind noise, machinery noise, one's own speech sound, and other speech sound.”); the one or more biomarkers are linguistic characteristics of the speech of the person, the device is a body worn hearing prosthesis (paragraph [0016] “… hearing assistance device 100 is a hearing aid.”) (paragraph [0032] “… behind-the-ear devices, on-the-ear devices, and in-the-ear devices, such as in-the-canal and/or completely-in-the-canal hearing assistance devices…”), and the device is configured to identify the linguistic characteristics as such. Regarding Claim 23, Zhang discloses the device of claim 21, wherein: the device is a hearing prosthesis (paragraph [0016] “… hearing assistance device 100 is a hearing aid.”) configured to evaluate the one or more biomarkers and develop data indicative of the person's ability to hear (paragraph [0026] “… statistics of actual hearing inputs where appropriate to assist an audiologist or dispenser in diagnosing problems …”) (paragraph [0026] “… it is possible to capture and store input sound level histogram.”) (paragraph [0028] “The output of the actual usage log and hypothetical usage log (or plurality of hypothetical usage logs in embodiments employing more than one hypothetical usage log) may be depicted in a graphical format to a user and may be displayed by the programmer to review behavior of the hearing assistance device”), wherein the person is a recipient of the hearing prosthesis (paragraph [0032] “… behind-the-ear devices, on-the-ear devices, and in-the-ear devices, such as in-the-canal and/or completely-in-the-canal hearing assistance devices…”). 
Regarding Claim 24, Zhang discloses the device of claim 21, wherein: the one or more biomarkers are linguistic characteristics of people other than the person and the device is configured to identify the linguistic characteristics as such (paragraph [0025] “The resulting actual and hypothetical usage logs can also be used to determine statistics on the modes based on actual and hypothetical settings. For example, the gain reduction data for … other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class. The audiologist can adjust the size of gain reduction for each sound class based on the patient's feedback and the actual and hypothetical average gain reduction log. These examples are just some of the possible available statistics that may be used with the actual and hypothetical usage logs.”) (TABLES 1 and 2, which show usage of others' speech, actual %, and average gain reduction, in paragraph [0029]). Regarding Claim 25, Zhang discloses the device of claim 21, wherein: the device is a body worn hearing prosthesis (paragraph [0016] “… hearing assistance device 100 is a hearing aid.”) (paragraph [0032] “… behind-the-ear devices, on-the-ear devices, and in-the-ear devices, such as in-the-canal and/or completely-in-the-canal hearing assistance devices…”) and is configured to automatically adjust a feature thereof based on the identification (paragraph [0018] “Processor 120 includes modules for execution that will detect environments and make adaptations accordingly as set forth herein. Such processing can be on one or more audio inputs, depending on the function.”). Regarding Claim 26, Zhang discloses the device of claim 21, wherein the device is a multimodal hearing prosthesis configured to automatically adjust a crossover point of modes of the device based on the identification (paragraph [0017] “In such embodiments, directionality is controllable via phasing mic 1 and mic 2. 
In one embodiment, mic 1 is a directional microphone with an omnidirectional setting. In one embodiment, the gain on mic 2 is reduced so that the system 100 is effectively a single microphone system.”) (paragraph [0018] “Processor 120 includes modules for execution that will detect environments and make adaptations accordingly as set forth herein. Such processing can be on one or more audio inputs, depending on the function.”). Regarding Claim 29, Zhang discloses a device, comprising inter alia: a component configured to receive data based on speech of a person, wherein the device is configured to evaluate the received data (paragraph [0031] “In one embodiment, the processor of the hearing assistance device can perform statistical operations on data from the actual and hypothetical usage logs. It is understood that data from the usage logs may be processed by software executing on a computer to provide statistical analysis of the data.”) and identify a recommended action to be taken by the person based on the evaluation (paragraph [0031] “Also, advanced software solutions can suggest parameters for the dispenser/audiologist based on the actual usage log and one or more hypothetical usage logs.”) (paragraph [0028] “The output of the actual usage log and hypothetical usage log (or plurality of hypothetical usage logs in embodiments employing more than one hypothetical usage log) may be depicted in a graphical format to a user and may be displayed by the programmer to review behavior of the hearing assistance device. In embodiments recording environmental aspects, such outputs may be made on a graphical device to monitor behavior, for example, as a function of time and/or frequency.”). 
Regarding claim 30, Zhang discloses the device of claim 29, wherein: the device is configured to evaluate the speech of the person by way of the device being configured to evaluate the received data by acoustically analyzing patterns in speech production of the person (paragraph [0024] “A comparison between the actual and hypothetical usage logs allows a dispenser or audiologist to recommend proper enablement of modes for a user based on his or her typical environment. In this example, the actual usage log can track the number of times the device detected wind noise, machinery noise, one's own speech sound, and other speech sound. The hypothetical usage log can track the number of times the device would have detected wind noise, machinery noise, one's own speech sound, and other speech sound, given the hypothetical detection settings.”) (The number of times wind, machinery, and differing kinds of speech are detected is, especially as broadly claimed, the detection of patterns in speech). Regarding Claim 31, Zhang discloses the device of claim 29, wherein: the recommended action is an adjustment to a habilitation and/or rehabilitation regime of the person (paragraph [0024] “A comparison between the actual and hypothetical usage logs allows a dispenser or audiologist to recommend proper enablement of modes for a user based on his or her typical environment. In this example, the actual usage log can track the number of times the device detected wind noise, machinery noise, one's own speech sound, and other speech sound. The hypothetical usage log can track the number of times the device would have detected wind noise, machinery noise, one's own speech sound, and other speech sound, given the hypothetical detection settings.”). 
Regarding Claim 32, Zhang discloses the device of claim 29, wherein: the device is configured to receive second data based on speech of other people other than the person (paragraph [0025] “The resulting actual and hypothetical usage logs can also be used to determine statistics on the modes based on actual and hypothetical settings. For example, the gain reduction data for wind noise, machinery noise, one's own speech sound, and other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class.”); and the device is configured to compare the data and the second data (paragraph [0025] “For example, the gain reduction data for wind noise, machinery noise, one's own speech sound, and other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class. The audiologist can adjust the size of gain reduction for each sound class based on the patient's feedback and the actual and hypothetical average gain reduction log.”) (paragraph [0028] “The output of the actual usage log and hypothetical usage log (or plurality of hypothetical usage logs in embodiments employing more than one hypothetical usage log) may be depicted in a graphical format to a user and may be displayed by the programmer to review behavior of the hearing assistance device. In embodiments recording environmental aspects, such outputs may be made on a graphical device to monitor behavior, for example, as a function of time and/or frequency. Other forms of output, such as tabular output, are provided in various embodiments. 
The presentation methods set forth herein are demonstrative and not intended to be exhaustive or exclusive.” and TABLES 1 and 2 at paragraph [0029]) (comparison between own speech and other speech is performed by determining percentages); and the device is configured to identify the recommended action to be taken based on the evaluation of the received data and the comparison of the data and the second data (paragraph [0031] “In one embodiment, the processor of the hearing assistance device can perform statistical operations on data from the actual and hypothetical usage logs. It is understood that data from the usage logs may be processed by software executing on a computer to provide statistical analysis of the data.”). Regarding Claim 33, Zhang discloses the device of claim 29, wherein: evaluating the speech of the person by way of the device being configured to evaluate the received data includes comparing the data based on speech of the person to data based on prior speech of the person produced before the speech of the person of the data based on speech of the person (paragraph [0028] “In embodiments recording environmental aspects, such outputs may be made on a graphical device to monitor behavior, for example, as a function of time and/or frequency.”). Regarding Claim 35, Zhang discloses the device of claim 29, wherein: the device is also configured to evaluate non-speech related data, wherein the functionality of identifying the action is also based on the evaluation of the non-speech related data (paragraph [0025] “The resulting actual and hypothetical usage logs can also be used to determine statistics on the modes based on actual and hypothetical settings. For example, the gain reduction data for wind noise, machinery noise, one's own speech sound, and other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class.”). 
(paragraph [0031] “Also, advanced software solutions can suggest parameters for the dispenser/audiologist based on the actual usage log and one or more hypothetical usage logs.”). Regarding Claim 46, Zhang discloses the device of claim 39, wherein: the device is further configured to evaluate the data based on captured first sounds to determine whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions (paragraph [0024] “…the actual usage log can track the number of times the device detected wind noise, machinery noise, one's own speech sound, and other speech sound.”). Regarding Claim 47, Zhang discloses the device of claim 29, wherein: the device is configured so that execution of the evaluation of the received data is based at least in part on one or more linguistic characteristics in the speech of the person (paragraph [0025] “The resulting actual and hypothetical usage logs can also be used to determine statistics on the modes based on actual and hypothetical settings. For example, the gain reduction data for wind noise, machinery noise, one's own speech sound, and other speech sound can be averaged to determine actual average gain reduction per source class and hypothetical average gain reduction per source class.”). Regarding Claim 48, Zhang discloses the device of claim 43, wherein: the device is configured to wirelessly and/or wiredly communicate with a personal computer (paragraph [0027]). Regarding Claim 49, Zhang discloses the device of claim 41, wherein: the device is configured to wirelessly and/or wiredly communicate with a body worn device configured to capture the first sounds and enable adjustments to be made to the body worn device based on the diagnosis (paragraph [0027]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over one of VanDam, Banna or Zhang in view of US 20090185704 A1 to Hockley. 
Each of VanDam, Banna, and Zhang discloses the device of Claim 21, but none expressly discloses wherein the device is also configured to evoke a hearing percept based at least in part on the captured audio environment. However, Hockley teaches a hearing aid comprising a microphone and a voice detector to determine if voices are present in an environment (paragraph [0013]). Hockley teaches adjustable processing parameters and a frequency analyzer (paragraph [0013]) that allow a specific hearing profile and acoustic environment to be modified according to the type of voice present to evoke a hearing percept (paragraph [0014]). One having ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to modify the disclosures of VanDam, Banna, or Zhang with the teachings of Hockley, as Hockley teaches that such modification would have optimized perception of a voice for the wearer of the hearing aid (paragraph [0014]). Claim(s) 34 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of US 20090208913 A1 to Xu et al. (hereinafter, Xu). Zhang discloses the device of claim 29 as set forth and cited above. Zhang does not expressly disclose wherein the device is configured to, based on the received data based on speech of the person, compare the speech of the person to data of a speech development trajectory for similarly situated people. However, Xu teaches computer speech/language-based metrics and comparing them to statistical averages for others with similar attributes, such as the age of peers (paragraph [0150]). Xu teaches an age-based model to assess expressive language development (paragraph [0003]) and explicitly represents progress over time (paragraph [0152] “FIG. 12 illustrates a graphical representation of a key child's language progression over a selected amount of time and for particular characteristics.”) based on normative distributions and percentiles, i.e., the position on the trajectory relative to age-matched peers (paragraphs [0166]-[0168]). 
Xu further teaches wherein a functionality of evaluating data based on speech of the person includes detecting at least one of speech articulation problems (paragraph [0214] “Articulation analysis … articulation level”). One having ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to modify the disclosure of Zhang with the speech comparison and trajectory of Xu, as Xu teaches that the metrics may be used to promote improvement of language development and track the development of a language skill (paragraph [0011]). Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of US 20130209970 A1 to Maja et al. (hereinafter, Maja). Regarding Claim 37, Zhang discloses the device of claim 29 as set forth and cited above. Zhang does not expressly disclose wherein the functionality of evaluating the data based on speech of the person includes determining whether the person is having problems hearing in a first set of sound conditions relative to that which is the case in a second set of sound conditions. However, Maja teaches a hearing aid to be worn by a user, which provides training and analysis of speech of the user (paragraphs [0001]-[0003]). Maja teaches that training stages can be carried out in a first and second condition (quiet and noise) and provide a comparison to see when speech is an issue or has improved (paragraph [0048]). One having ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to modify the disclosure of Zhang with the evaluation of problems in first and second conditions of Maja, as Maja teaches that such training and analysis can prepare hearing-aid wearers for different situations (paragraph [0020]). Response to Arguments Applicant’s arguments filed 1/22/2026 with respect to the claim objections and 35 U.S.C. 112 rejections are persuasive, and the rejections are withdrawn. 
The paragraph numbers with respect to the citations of Zhang have been corrected, requiring only a simple substitution of paragraph [0025] for [0026] and not affecting the merits of the rejection (the same quotes have been maintained). Examiner thanks the Applicant for pointing out the citation issue. Applicant's arguments filed 1/22/2026 with respect to the 35 U.S.C. 101, 102 and 103 rejections have been fully considered but they are not persuasive. The Applicant argues the 35 U.S.C. 101 rejection at pgs. 9-12. The arguments are unpersuasive for the following reasons: The Applicant argues that the claims do not fall within any of the three abstract idea categories. The Examiner disagrees and submits that identifying biomarkers, comparing speech, and recommending actions based on comparisons are all steps that can be performed mentally. The Applicant argues that even if the claims are abstract, the claims integrate the exception into a practical application because they effect treatment of a medical condition and require a particular machine. The Examiner disagrees and submits that hearing prostheses and processors are generic components that merely serve as the environment in which the abstract idea is practiced and in no way impose a meaningful limit, especially when the specification describes these as conventional devices (paragraphs [0032], [0045], [0047]). Merely applying an abstract idea using conventional and generic components does not constitute integration into a practical application. The Applicant argues that VanDam does not anticipate the claims by rewriting Claims 21, 39, 40 and 43-45, rewriting Claim 21 at pgs. 13-14, rewriting Claim 39 at pgs. 14-15, and rewriting Claim 40 at pg. 16. The Applicant argues that Banna does not anticipate the claims by rewriting Claim 21 at pgs. 16-18, rewriting Claim 39 at pgs. 18-19, rewriting Claim 39 at pgs. 19-20, and rewriting Claim 41 at pgs. 20-21. 
The Applicant argues that Zhang does not anticipate the claims by rewriting Claim 21 at pgs. 21-22, rewriting Claim 23 at pgs. 22-23, rewriting Claim 25 at pgs. 23-24, and rewriting Claim 31 at pg. 31. However, the Applicant has not identified any specific deficiency in the Examiner’s mapping. The Applicant’s arguments are not persuasive because they are based on claim language that does not appear in the claims as currently written. The Applicant presents “rewritten” versions of the claims, but the characterizations of the Examiner’s rejections are incorrect. The Examiner’s citations map the references to the claims as written, not to an alternative formulation. Applicant’s arguments are therefore directed at a strawman rather than the actual rejection. To the extent Applicant believes a particular claim limitation is not taught by the cited reference, Applicant should identify the specific limitation at issue and explain why the reference fails to teach it. Bare assertions that the Examiner’s mapping amounts to reading the claim differently than written are unpersuasive without a specific explanation of the deficiency. The Applicant argues at pg. 14 that the citations provided by VanDam are a classification, stating “But that is not what we claim.” This argument is unpersuasive. Characterizing VanDam as merely providing a “classification,” without explaining how this differs from the claimed invention, does not constitute a substantive traversal of the rejection. The Applicant argues at pg. 22 that the Examiner has not identified where there are biomarkers in Banna that are indicative of the recipient’s ability to hear. This argument is unpersuasive as the Examiner cited paragraph [0044], and it is noted that acoustic and linguistic measures, especially as cited in paragraphs [0034] and [0044], are biomarkers that are indicative of the recipient’s ability to hear. The Applicant argues at pg. 
24 with respect to Claim 26 that Zhang is not a multimodal device. This argument is unpersuasive as Zhang expressly discloses at paragraph [0008] that the devices operate across multiple distinct modes, including directionality, environmental, and gain-adjustment modes, which makes them multimodal devices. The Applicant argues at pg. 25 with respect to Claim 29 that there is no “speech of a person” disclosed by Zhang, only usage logs. This argument is unpersuasive because the usage log tracks detection of one’s own speech and other speech sounds, which are by definition speech of a person. The Applicant argues at pgs. 26-27 with respect to Claim 32 that there is no comparison of speech of the person to others. This argument is unpersuasive because paragraph [0025] clearly sets forth that data and second data can be compared, including that of other speech sounds. The Applicant argues at pgs. 28-29 with respect to Claim 35 that Zhang does not disclose the “identifying based on two things”. The argument is unpersuasive because paragraph [0025] sets forth the averaging of multiple variables including wind, machinery, one’s own speech, and others’ speech. This is the identification based on more than one “thing”. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN PATRICK DOUGHERTY whose telephone number is (571)270-5044. The examiner can normally be reached 8am-5pm (Pacific Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jacqueline Cheng can be reached at (571)272-5596. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SEAN P DOUGHERTY/Primary Examiner, Art Unit 3791

Prosecution Timeline

Feb 20, 2022
Application Filed
Jul 05, 2023
Response after Non-Final Action
Sep 16, 2025
Non-Final Rejection — §101, §102, §103
Jan 22, 2026
Response Filed
Mar 09, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599324
Systems and Methods for Phlebotomy Through a Peripheral IV Catheter
2y 5m to grant Granted Apr 14, 2026
Patent 12599373
BIOPSY DEVICE HAVING A LINEAR MOTOR
2y 5m to grant Granted Apr 14, 2026
Patent 12588833
MONITORING A SLEEPING SUBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12588845
LIQUID COLLECTION DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12588826
PHOTOPLETHYSMOGRAM SENSOR ARRANGEMENT
2y 5m to grant Granted Mar 31, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
90%
With Interview (+14.3%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 932 resolved cases by this examiner. Grant probability derived from career allow rate.
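
The projection figures above follow from simple arithmetic on the examiner's career counts (701 granted of 932 resolved). A minimal sketch of that derivation, assuming the interview lift is an additive percentage-point adjustment capped at 100% (the variable names are illustrative, not from the underlying data model):

```python
# Career allow rate: granted / resolved, taken from the examiner's record.
granted, resolved = 701, 932
allow_rate = granted / resolved  # ~0.752, displayed as 75%

# Assumption: the reported +14.3% interview lift is an additive
# percentage-point adjustment observed on resolved cases with an interview.
INTERVIEW_LIFT = 0.143
with_interview = min(allow_rate + INTERVIEW_LIFT, 1.0)  # ~0.895, displayed as 90%

print(f"Career allow rate: {allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

This is an estimate only; a lift measured as a ratio or on a matched subset of cases would yield a slightly different figure.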
