Prosecution Insights
Last updated: April 19, 2026
Application No. 18/754,955

VETERINARY ACOUSTIC AND BIOFEEDBACK DEVICE

Non-Final OA: §101, §103, §112
Filed: Jun 26, 2024
Examiner: GHAND, JENNIFER LEIGH-STEWAR
Art Unit: 3796
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Quadralynx Inc.
OA Round: 1 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 61% (404 granted / 667 resolved; -9.4% vs TC avg)
Interview Lift: +28.8% (strong lift in resolved cases with an interview)
Typical Timeline: 4y 0m avg prosecution; 65 currently pending
Career History: 732 total applications across all art units
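The headline figures above are internally consistent; a quick arithmetic check (the semantics of the "vs TC avg" and "interview lift" deltas are this report's assumption, read as percentage points):

```python
# Sanity-check the examiner statistics shown above.
granted, resolved = 404, 667
allow_rate_pct = round(granted / resolved * 100)
assert allow_rate_pct == 61                       # "Career Allow Rate: 61%"

# If "-9.4% vs TC avg" means 9.4 points below the Tech Center average,
# the implied TC average allowance rate is about 70.4%.
assert round(allow_rate_pct + 9.4, 1) == 70.4

# Reading "+28.8% interview lift" as percentage points against the 89%
# with-interview figure implies roughly 60.2% without an interview.
assert round(89 - 28.8, 1) == 60.2
```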

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)

Deltas are measured against a Tech Center average estimate; based on career data from 667 resolved cases.
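One useful reading of the table above: each statute's rate minus its "vs TC avg" delta backs out the same baseline, suggesting every statute is compared against a single Tech Center estimate. A quick check (assuming delta = examiner rate − TC average, in percentage points):

```python
# Back out the implied Tech Center average from each statute row above.
rows = {
    "101": (5.6, -34.4),
    "103": (39.3, -0.7),
    "102": (18.7, -21.3),
    "112": (28.0, -12.0),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}

# Every statute backs out to the same 40.0% baseline estimate.
assert all(v == 40.0 for v in implied_tc_avg.values())
```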

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claim 6 is objected to because of the following informalities: in claim 6, line 3, the extra "the" should be deleted to fix an inadvertent typographical error. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 14 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 14 and 20 recite "providing, as an input to a machine-learned (ML) model trained to identify the pathological state;" however, the claims do not recite what is provided as an input to the ML model to identify the pathological state; clarification is required. As best understood for the purposes of examination, the input into the ML model has been interpreted to include the sensor data (claim 14) and the audio data (claim 20).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-20 are drawn to an apparatus, which is a statutory category of invention (Step 1: YES).

The claim limitations within independent claim 1 that set forth or describe the abstract idea are: "receiving, from the one or more microphones, audio data associated with sound captured within an environment of the animal, determining, based at least in part on the audio data, that the sound is associated with the animal, determining, based at least in part on the sound being associated with the animal, one or more biomarkers associated with the sound, determining a threshold associated with the animal having a behavioral or pathological state, determining, based at least in part on the one or more biomarkers satisfying the threshold, that the sound is indicative of the animal having the behavioral or pathological state, causing, based at least in part on the one or more biomarkers satisfying the threshold, output of a first notification on the output component,".
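Read as an algorithm, the claim-1 limitations quoted above reduce to: extract biomarkers from audio, compare them to a threshold, and emit a notification when the threshold is satisfied. A minimal sketch of that pipeline (all names and the amplitude-based biomarkers are this report's hypothetical illustration, not language from the claims):

```python
def extract_biomarkers(samples):
    # Hypothetical biomarkers: peak and mean absolute amplitude of the sound.
    peak = max(abs(s) for s in samples)
    mean = sum(abs(s) for s in samples) / len(samples)
    return {"peak": peak, "mean": mean}

def classify(samples, threshold=0.5):
    # Claim-1-style logic: a biomarker satisfying the threshold indicates the
    # behavioral or pathological state and triggers the first notification.
    biomarkers = extract_biomarkers(samples)
    if biomarkers["peak"] >= threshold:
        return "first notification: state detected"
    return None  # threshold not satisfied; no notification
```

The examiner's point is that nothing here exceeds what could be done with pen and paper; the sketch only makes the comparison structure explicit.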
The claim limitations within independent claim 8 that set forth or describe the abstract idea are: "receiving, from the one or more sensors, data, determining that the data is indicative of sound generated from an animal, determining, based at least in part on the data being indicative the sound generated from the animal, one or more biomarkers associated with the sound, the one or more biomarkers including at least an amplitude associated with the sound, determining that the one or more biomarkers fail to satisfy a threshold associated with a pathological state of the animal, and causing, based at least in part on the one or more biomarkers failing to satisfy the threshold, output of a notification via the one or more output components".

The claim limitations within independent claim 16 that set forth or describe the abstract idea are: "receiving, from the first microphone, first audio data associated with a sound captured in an environment, receiving, from the second microphone, second audio data associated with the sound, determining, based at least in part on the first audio data and the second audio data, that the sound is associated with an animal, determining, based at least in part on the sound being associated with the animal, one or more characteristics associated with the sound, determining, based at least in part on the one or more characteristics, that the sound is indicative of a pathological state".

The reason these limitations are considered an abstract idea is the following: the above limitations are a process directed to a concept relating to organizing or analyzing information in a way that can be performed as human mental work; i.e., under its broadest reasonable interpretation, each covers performance of the limitation in the mind with the aid of pen and paper but for the recitation of generic computer components. That is, other than reciting "one or more processor and one or more non-transitory computer-readable media storing computer executable instructions that when executed by the one or more processors, cause the device to perform acts" (claims 1, 8 and 16), nothing in the claim elements precludes the steps from practically being performed in the mind with the aid of pen and paper. For example, but for that recitation, the "receiving" steps in the context of the claims encompass the user receiving a printout of the data or visually viewing the data on a screen, and the "determining" steps encompass the user, with the aid of pen and paper, using the audio data (claims 1 and 16) or data (claim 8) to determine one or more biomarkers and comparing the one or more biomarkers to a threshold to determine a physiological or behavioral state. There is nothing to suggest an undue level of complexity in the receiving and determining steps.

If a claim limitation, under its broadest reasonable interpretation, covers a mental process (i.e., performance of the limitation in the mind) but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. Although not drawn to the same subject matter, the claimed limitations are similar to concepts that have been identified as abstract by the courts, such as: collecting information, analyzing it, and reporting certain results of the collection and analysis in Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016); and selecting certain information, analyzing it using mathematical techniques, and reporting or displaying the results of the analysis in SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018). Thus, the claims are directed to a judicial exception and fall squarely within the realm of "abstract ideas," which is a patent-ineligible concept (Step 2A, Prong One: YES).

Analyzing the claim as a whole for a practical application, the claim does not include additional elements/steps that are sufficient to amount to significantly more than the judicial exception. The additionally recited elements appended to the abstract idea include "one or more microphones" (claims 1 and 16), "one or more sensors" (claim 8), "one or more output components" (claims 1, 8 and 16), "a network interface" (claim 1), "one or more processor and one or more non-transitory computer-readable media storing computer executable instructions that when executed by the one or more processors, cause the device to perform acts" (claims 1, 8 and 16), "causing, based at least in part on the one or more characteristics being indicative of the pathological state, output of a notification via the one or more output components" (claims 1, 8 and 16), and "sending data associated with a second notification to be output on an electronic device associate with a caregiver of the animal" (claim 1).

The additional elements of "one or more microphones" (claims 1 and 16) and "one or more sensors" (claim 8) merely add insignificant extra-solution activity and are recited at a high level of generality (i.e., as a general means of gathering data), and are merely nominally, insignificantly or tangentially related to the performance of the steps; i.e., they amount to mere data gathering, which is a form of insignificant extra-solution activity (pre-solution activity). All uses of the recited judicial exception require the pre-solution activity of data gathering.
The additional elements reciting "causing … output of a notification via the one or more output components" (claims 1, 8 and 16) and "sending data associated with a second notification to be output on an electronic device associate with a caregiver of the animal" (claim 1) merely add insignificant extra-solution activity and are recited at a high level of generality (i.e., as a general means of outputting and transmitting data), and are merely nominally, insignificantly or tangentially related to the performance of the steps; i.e., they amount to insignificant application, which is a form of insignificant extra-solution activity (post-solution activity); see MPEP 2106.05(d) and MPEP 2106.05(g).

As discussed above with respect to integration of the abstract idea, the additional elements of "one or more output components" (claims 1, 8 and 16), "a network interface" (claim 1), and "one or more processor and one or more non-transitory computer-readable media storing computer executable instructions that when executed by the one or more processors, cause the device to perform acts" (claims 1, 8 and 16) amount to no more than mere instructions to apply the exception using generic computer components. These are purely general-purpose computer components recited as carrying out the general-purpose computer functions of receiving data, processing data and displaying data to enable the abstract process. The specification discloses: "The processor(s) 120 may include a graphics processing unit (GPU), a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionally described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc."; see para. [0072] of the published application. As such, these recitations are nothing more than nominal recitations of a computer covering an abstract concept. See Bancorp Servs. v. Sun Life Assurance Co., 687 F.3d 1266, 103 USPQ2d 1425 (Fed. Cir. 2012). See also Mayo Collaborative Services v. Prometheus Laboratories, Inc., 101 USPQ2d 1961 (U.S. 2012), which establishes that a claim cannot simply state the abstract idea and add the words "apply it". Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (Step 2A, Prong Two: NO).

Claims 1, 8 and 16 do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception (i.e., an inventive concept), for the same reasons as described above: all elements are directed to insignificant extra-solution activity which merely facilitates the abstract idea and/or purely general-purpose computer components recited as carrying out the general-purpose computer function of processing data and displaying data to enable the abstract process; the additional elements do not amount to significantly more than the above-identified judicial exception. Further, the use of "one or more microphones" or "one or more sensors" for collecting data, and "one or more output components", "a network interface" and "one or more processor and one or more non-transitory computer-readable media storing computer executable instructions" to collect data, analyze data and display data, is well-understood, routine, conventional activity; see US 2011/0092779 to Chang et al., Figs. 3A-3B, 5. Furthermore, the specification discloses that the functionality described herein can be performed, at least in part, by one or more hardware logic components (FPGAs, ASICs, ASSPs, SOCs, CPLDs, etc.); see para. [0072] of the published application, as quoted above. Therefore, the recited components are nothing more than purely general-purpose computer components recited as carrying out the general-purpose computer functions of processing data and displaying data to enable the abstract process. Similarly, when considered as an ordered combination, the additional components/steps of the claims add nothing that is not already present when the steps are considered separately (Step 2B: NO). The claims are not patent eligible.

Claims 2-7, 9-15 and 17-20 depend directly or indirectly from claims 1, 8 or 16. Therefore, the dependent claims rely upon the same abstract idea as the independent claims, as set forth above.
Additionally, the dependent claims do nothing more than further limit the abstract idea while failing to qualify as "significantly more", and the specificity of an abstract idea does not make it any "less abstract": the claims are still directed to concepts relating to organizing or analyzing information in a way that can be performed mentally or is analogous to human mental work. Therefore, the dependent claims are also not patent eligible for the reasons discussed above.

Claims 2-5, 7, 13 and 18-19 fail to provide significantly more, when considered as an ordered combination, as they merely provide further limitation regarding the abstract idea, which can still nonetheless be considered mental processes, i.e., performed in the mind with the aid of pen and paper. Claims 2, 9, 15 and 18 fail to provide significantly more, when considered as an ordered combination, as they merely provide further limitation regarding the data gathering, which merely adds insignificant extra-solution activity and is merely nominally, insignificantly or tangentially related to the performance of the steps, i.e., amounts to mere data gathering, which is a form of insignificant extra-solution activity (pre-solution activity); all uses of the recited judicial exception require the pre-solution activity of data gathering. Claims 5, 11-12 and 17 further add insignificant extra-solution activity that is merely nominally, insignificantly or tangentially related to the performance of the steps, i.e., amounts to insignificant application, which is a form of insignificant extra-solution activity (post-solution activity); see MPEP 2106.05(g). Claims 6, 10, 14 and 20 fail to provide significantly more, when considered as an ordered combination, as they merely provide further limitation regarding the processing system, which comprises purely general-purpose computer components recited as carrying out the general-purpose computer functions of processing data and displaying data to enable the abstract process. As such, these recitations are nothing more than nominal recitations of a computer covering an abstract concept. See Bancorp Servs. v. Sun Life Assurance Co., 687 F.3d 1266, 103 USPQ2d 1425 (Fed. Cir. 2012); see also Mayo Collaborative Services v. Prometheus Laboratories, Inc., 101 USPQ2d 1961 (U.S. 2012), which establishes that a claim cannot simply state the abstract idea and add the words "apply it". The instantly rejected claims are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

In the interest of advancing prosecution, the examiner suggests providing evidence, for example, delineating how the abstract idea and/or the additional elements appended to the abstract idea result in an improvement to the technology/technical field, which can show eligibility, and/or adding a practical application of the claimed method outside of the computer (e.g., treating a patient). See MPEP § 716.01(c) for examples of providing evidence supported by an appropriate affidavit or declaration. For additional guidance, applicant is directed generally to MPEP § 2106.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 2011/0092779 to Chang et al. (Chang) in view of US 2017/0358301 to Raitio et al. (Raitio) (both cited by applicant).

In reference to at least claim 1, Chang discloses a device to be worn by an animal, the device comprising: a microphone (e.g., "a microphone in communication with the processor"; para. [0012], [0033]); one or more output components (e.g., "send outputs to health monitoring device 100", para. [0034]); a network interface (e.g., "This system may be used at the local level, or may be connected to a network.", para. [0008], [0033]); one or more processors (e.g., "the device includes a processor, a memory in communication with the processor, a remote health monitor logic on the memory, a health profile database on the memory, a wireless transceiver in communication with the processor"; para. [0012]); and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors (e.g.
"the device includes a processor, a memory in communication with the processor, a remote health monitor logic on the memory, a health profile database on the memory, a wireless transceiver in communication with the processor"; para. [0012]), cause the device to perform acts (e.g., "The comparison may be done by a processor on health monitoring device 200", para. [0039], [0044]) comprising: receiving, from the one or more microphones, audio data associated with sound captured within an environment of the animal (e.g., "receipt of the audio tone by the microphone", para. [0012]; "Microphone 102 provides an input for speech or sounds from the user.", para. [0033]); determining, based at least in part on the audio data, that the sound is associated with the animal (e.g., "The present invention provides devices and methods for remotely monitoring the health of an individual. The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone, movements, blood pressure, temperature, etc.", Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para. [0022]); and determining, based at least in part on the sound being associated with the animal, one or more biomarkers associated with the sound (e.g., same disclosure; Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para. [0022]). Chang further discloses that users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech (e.g., para. [0039]).

However, Chang fails to disclose determining a threshold associated with the animal having a behavioral or pathological state; determining, based at least in part on the one or more biomarkers satisfying the threshold, that the sound is indicative of the animal having the behavioral or pathological state; causing, based at least in part on the one or more biomarkers satisfying the threshold, output of a first notification on the output component; and sending data associated with a second notification to be output on an electronic device associated with a caregiver of the animal.

Raitio discloses determining a threshold associated with having a behavioral or pathological state, such as whispered or non-whispered speech (e.g., speech is received from a user and, based on the speech input, it is determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, para. [0244]); determining, based at least in part on the one or more biomarkers satisfying the threshold, that the sound is indicative of the animal having the behavioral or pathological state (e.g., same disclosure; abstract, para. [0244]); and causing, based at least in part on the one or more biomarkers satisfying the threshold (e.g., same disclosure; abstract, para. [0244]), output of a first notification on the output component, and sending data associated with a second notification to be output on an electronic device associated with a caregiver of the animal (e.g., output can be provided as voice, sound, alerts, text messages, vibrations, and/or combinations of two or more of the above; abstract, para. [0077], [0244]).
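Raitio's whispered-speech test, as characterized above, compares the absolute slope of the magnitude spectrum below 800 Hz against a non-whispered reference by a threshold slope percentage. A rough sketch of that comparison (the straight-line-fit formulation and all names are this report's assumption for illustration, not Raitio's actual implementation):

```python
import numpy as np

def band_slope(signal, sample_rate, f_hi=800.0):
    # Slope of a straight-line fit to the magnitude spectrum below f_hi Hz.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = freqs <= f_hi
    slope, _intercept = np.polyfit(freqs[band], spectrum[band], 1)
    return slope

def looks_whispered(sample, reference, sample_rate, threshold_pct=50.0):
    # Per the characterization above: a whispered input's |slope| is smaller
    # than the non-whispered reference's |slope| by a threshold percentage.
    s = abs(band_slope(sample, sample_rate))
    r = abs(band_slope(reference, sample_rate))
    return s < r * (1.0 - threshold_pct / 100.0)
```

A voiced signal concentrates energy at low frequencies (steep in-band slope), while a whisper-like noise signal has a nearly flat spectrum, so the comparison separates the two.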
Users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Chang to include determining spectrum characteristics of speech that include a threshold associated with the user having a behavioral or pathological state such as a threshold associated with a whispered speech and determining, based at least in part on the one or more biomarkers satisfying the threshold such as the presence of whispered speech, that the sound is indicative of the animal having the behavioral or pathological state such as sore throats, coughing, and/or other illnesses or diseases, and causing output of a first notification such as an alert or message on the output component, and sending data associated with a second notification such as an alert or message to be output on an electronic device associated with a caregiver as taught by Raitio into the system of Chang for the purpose of identifying a whispered speech input having one or more spectrum characteristics that are different from the corresponding spectrum characteristics of a non-whispered speech input providing a whispered speech determination module which improves the accuracy of recognizing subsequent user speech inputs (‘301, para. [0245]). In reference to at least claim 2 Chang modified by Raitio renders obvious a device according to claim 1. Chang further discloses a sensor, the acts further comprising receiving, from the sensor, sensor data (e.g. “Health monitoring device 100 uses various sensors to take readings on the user as well as the user's environment.” para [0033]), wherein: determining that the audio corresponds to the user speech is based at least in part on the sensor data (e.g. 
“The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone,”, abstract; “Health monitoring device 100 uses various sensors to take readings on the user as well as the user's environment.,” para [0033]); and determining the one or more biomarkers is based at least in part on the sensor data (e.g. “The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone,”, abstract; “Health monitoring device 100 uses various sensors to take readings on the user as well as the user's environment.,” para [0033]). In reference to at least claim 3 Chang modified by Raitio renders obvious a device according to claim 1. Chang further discloses the acts further comprising: receiving, from the one or more microphones, second audio data associated with a second sound captured within the environment (e.g. “receipt of the audio tone by the microphone”, para. [0012]; “Microphone 102 provides an input for speech or sounds from the user”, para. [0033]); determining, based at least in part on the second audio data, that the second sound is associated with the environment (e.g. “The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone,”, abstract; “Health monitoring device 100 uses various sensors to take readings on the user as well as the user's environment.,” para [0033]); and determining one or more second biomarkers associated with the second (e.g. “The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. 
These characteristics may include voice level and tone,”, abstract; “Health monitoring device 100 uses various sensors to take readings on the user as well as the user's environment.,” para [0033]); wherein determining that the sound is indicative of the animal having the pathological state is based at least in part on the one or more second biomarkers (the modified device with Raitio which utilizes thresholds for comparison to identify behavioral or pathological state based on the biomarkers would also be used when provided with second audio data, see modification above within claim 1). In reference to at least claim 5 Chang modified by Raitio renders obvious a device according to claim 1. Chang further discloses one or more attachment mechanisms for coupling the device to the animal (e.g. Figs. 4A-4C 4C shows a rope for attaching the device to the user; para. [0042]). In reference to at least claim 6 Chang modified by Raitio renders obvious a device according to claim 1. Chang fails to disclose wherein determining the one or more biomarkers is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify the behavioral or pathological state, the audio data; and receiving, as an output from the ML model, an indication associated with the one or more biomarkers. Raitio discloses wherein determining the one or more biomarkers (e.g. determination whether the speech input includes a whispered speech, para [0248]) is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify the behavioral or pathological state, the audio data (e.g. determination whether the speech input includes a whispered speech; the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]); and receiving, as an output from the ML model, an indication associated with the one or more biomarkers (e.g. 
determination whether the speech input includes a whispered speech; the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein determining the one or more biomarkers is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify the behavioral or pathological state, the audio data; and receiving, as an output from the ML model, an indication associated with whispered speech as taught by Raitio into the system of Chang for the purpose of providing a whispered speech determination module which improves the accuracy of recognizing the subsequent user speech inputs and adjustments may increase the accuracy of determination whether the speech input includes a whispered speech input (‘301, para. [0248]). In reference to at least claim 7 Chang modified by Raitio renders obvious a device according to claim 1. Chang further discloses wherein the one or more biomarkers include at least one of a pitch of the sound, a tone associated with the sound, a pause in the sound, a duration of the sound, or an amplitude of the sound (e.g. “The present invention provides devices and methods for remotely monitoring the health of an individual. The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone, movements, blood pressure, temperature, etc.”, Figs, 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]). In reference to at least claim 8 Chang discloses a device (e.g. device; abstract) comprising: one or more sensors (e.g. sensors; para [0040]); one or more output components (e.g. send outputs to health monitoring device; para [0034]); one or more processors (e.g. 
“the device includes a processor, a memory in communication with the processor, a remote health monitor logic on the memory, a health profile database on the memory, a wireless transceiver in communication with the processor; para [0012]); and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors (e.g. the device includes a processor, a memory in communication with the processor, a remote health monitor logic on the memory, a health profile database on the memory, a wireless transceiver in communication with the processor; para [0012]), cause the device to perform acts (e.g. a processor of health monitoring device; para [0044]) comprising: receiving, from the one or more sensors, data; determining that the data is indicative of sound generated from an animal (e.g. “The present invention provides devices and methods for remotely monitoring the health of an individual. The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone, movements, blood pressure, temperature, etc.”, Figs. 1, 2, 3A, 3B, 4A-4C, abstract, para [0022]); determining, based at least in part on the data being indicative of the sound generated from the animal, one or more biomarkers associated with the sound, the one or more biomarkers including at least an amplitude associated with the sound (e.g. “The present invention provides devices and methods for remotely monitoring the health of an individual. The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone, movements, blood pressure, temperature, etc.”, Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]). 
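As a concrete illustration of the biomarker determination recited in claim 8 (an amplitude associated with a detected sound, plus the pitch-type biomarkers of claim 7), the computation can be sketched as follows. Nothing below is drawn from Chang or the application; the function names, sample rate, and pitch-search band are illustrative assumptions.

```python
import numpy as np

def amplitude_biomarker(samples: np.ndarray) -> float:
    """Root-mean-square amplitude of an audio frame (one illustrative biomarker)."""
    return float(np.sqrt(np.mean(np.square(samples))))

def pitch_biomarker(samples: np.ndarray, sample_rate: int = 16_000) -> float:
    """Crude pitch estimate from the autocorrelation peak, searched over 50-500 Hz."""
    x = samples - samples.mean()
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = sample_rate // 500, sample_rate // 50  # lag range for 500 Hz .. 50 Hz
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# A synthetic 200 Hz tone stands in for a captured vocalization.
t = np.arange(0, 0.1, 1 / 16_000)
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
print(amplitude_biomarker(tone))  # ~0.354, i.e. 0.5 / sqrt(2)
print(pitch_biomarker(tone))      # ~200.0
```

A threshold comparison of the kind the rejection maps to Raitio would then reduce to a simple test such as `amplitude_biomarker(frame) < threshold`.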
However, Chang fails to disclose determining that the one or more biomarkers fail to satisfy a threshold associated with a pathological state of the animal, and causing, based at least in part on the one or more biomarkers failing to satisfy the threshold, output of a notification via the one or more output components. Raitio discloses determining a threshold associated with having a behavioral or pathological state such as a whispered speech or a non-whispered speech (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, [0244]), determining, based at least in part on the one or more biomarkers satisfying the threshold, that the sound is indicative of the animal having the behavioral or pathological state (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g. below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, [0244]), causing, based at least in part on the one or more biomarkers satisfying the threshold (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage, abstract, para. 
[0244]), output of a first notification on the output component, and sending data associated with a second notification to be output on an electronic device associated with a caregiver of the animal (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; output can be provided as voice, sound, alerts, text messages, vibrations, and/or combinations of two or more of the above; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, para [0077], [0244]). Users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Chang to include determining spectrum characteristics of speech that include a threshold associated with the user having a behavioral or pathological state such as a threshold associated with a whispered speech and, based at least in part on the one or more biomarkers failing to satisfy the threshold such as non-whispered speech, causing output of a first notification such as an alert or message on the output component as taught by Raitio into the system of Chang for the purpose of identifying a non-whispered speech input having one or more spectrum characteristics that are different from the corresponding spectrum characteristics of a whispered speech input providing a whispered speech determination module which improves the accuracy of recognizing subsequent user speech inputs (‘301, para. [0245]). In reference to at least claim 9 Chang modified by Raitio renders obvious a device according to claim 8. 
Chang further discloses wherein the one or more sensors comprise: at least one microphone (e.g. microphone; para [0012], [0033]); and at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, or a piezoelectric sensor (e.g. accelerometer 313 is also used to detect (sensor) motions of health monitoring device 300; para [0041]). In reference to at least claim 10 Chang modified by Raitio renders obvious a device according to claim 8. Chang further discloses wherein the one or more output components (e.g. send outputs to health monitoring device; para [0034]) comprise at least one of a lighting element or a speaker (e.g. a speaker; para [0012], [0027]). In reference to at least claim 11 Chang modified by Raitio renders obvious a device according to claim 8. Chang fails to disclose the acts further comprising sending a second notification to an electronic device. Raitio discloses the acts further comprising sending a second notification to an electronic device (e.g. includes one or more tactile output generators; a tactile output generator coupled to haptic feedback controller 261 in I/O subsystem; output can be provided as voice, sound, alerts, text messages, vibrations (haptic), and/or combinations of two or more of the above; para [0064], [0077]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the acts further comprising sending a second notification to an electronic device as taught by Raitio into the system of Chang for the purpose of providing tactile feedback generation instructions from a haptic feedback module to generate alerts via tactile outputs on the user device that are capable of being sensed by a user of the device. In reference to at least claim 12 Chang modified by Raitio renders obvious a device according to claim 8. Chang further discloses wherein the device is configured to be worn by the animal (e.g. 
the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; the device allows individuals to constantly monitor their health; an individual uses a health monitoring device, in the form of a wearable wireless voice remote, to interact with a 'computerized' healthcare service provider; Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]). In reference to at least claim 13 Chang modified by Raitio renders obvious a device according to claim 8. Chang further discloses the acts further comprising: receiving, from the one or more sensors, second data (e.g. sensors; detecting data; para [0040]; [0041]); determining that the second data is indicative of second sound generated from the animal (e.g. capable of sensing characteristics of the individual; these characteristics may include voice level and tone; Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]); determining, based at least in part on the second data being indicative of the second sound generated from the animal, one or more second biomarkers associated with the second sound (e.g. capable of sensing characteristics of the individual; these characteristics may include voice level and tone; Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]). Chang fails to disclose determining that the one or more second biomarkers satisfy the threshold, and causing, based at least in part on the one or more second biomarkers satisfying the threshold, output of a third notification via the one or more output components. Raitio discloses determining that the one or more second biomarkers satisfy the threshold (e.g. 
speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, para [0244]), and causing, based at least in part on the one or more second biomarkers satisfying the threshold, output of a third notification via the one or more output components (speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; output can be provided as voice, sound, alerts, text messages, vibrations, and/or combinations of two or more of the above; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, para [0077], [0244]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include determining that the one or more second biomarkers satisfy the threshold, and causing, based at least in part on the one or more second biomarkers satisfying the threshold, output of a third notification via the one or more output components as taught by Raitio into the system of Chang for the purpose of providing a whispered speech input having one or more spectrum characteristics that are different from the corresponding spectrum characteristics of a non-whispered speech input in order to provide a whispered speech determination module which can improve the accuracy of recognizing the subsequent user speech inputs. In reference to at least claim 14 Chang modified by Raitio renders obvious a device according to claim 8. 
Chang fails to disclose wherein determining the one or more biomarkers is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify the pathological state; and receiving, as an output from the ML model, an indication associated with the one or more biomarkers. Raitio discloses wherein determining the one or more biomarkers (e.g. determination whether the speech input includes a whispered speech, para [0248]) is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify hypophonia, the first audio data and the second audio data (e.g. determination whether the speech input includes a whispered speech, the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]); and receiving, as an output from the ML model, an indication associated with the one or more biomarkers (e.g. determination whether the speech input includes a whispered speech, the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include providing the speech data as an input to a machine-learned (ML) model trained to identify whispered speech and receiving, as an output from the ML model, an indication associated with the whispered speech as taught by Raitio into the system of Chang for the purpose of providing a whispered speech determination module which improves the accuracy of recognizing the subsequent user speech inputs and adjustments may increase the accuracy of determination whether the speech input includes a whispered speech input (‘301, para. [0248]). In reference to at least claim 15 Chang modified by Raitio renders obvious a device according to claim 8. Chang further discloses wherein the one or more sensors (e.g. 
sensors; detect; para [0040], [0041]) comprise at least one of a microphone, an accelerometer, an inertial measurement unit (IMU), a GPS, or a temperature sensor (e.g. accelerometer 313 is also used to detect (sensor) motions of health monitoring device 300; para [0041]). Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2011/0092779 to Chang et al. (Chang) in view of US 2017/0358301 to Raitio et al. (Raitio) as applied to claim 1 further in view of Voice feature extraction from agitated speech to Seteanu (Seteanu) (cited by applicant). In reference to at least claim 4 Chang modified by Raitio renders obvious a device according to claim 1. Chang fails to disclose the acts further comprising: determining an audio signature associated with the pathological state; determining, based at least in part on the one or more biomarkers, an audio signature associated with the sound; and determining a similarity between the audio signature associated with the pathological state and the audio signature associated with the sound, wherein determining that the sound is indicative of the animal having the pathological state is based at least in part on the similarity. Seteanu discloses the acts further comprising: determining an audio signature associated with the pathological state (e.g. 
identify voice features by literature survey and validate them for human agitation detection; agitation is a common neuropsychiatric disorder that usually manifests in the elderly and which is associated with Alzheimer's disease and dementia; a value assigned to each window which represents the average frequencies that are most represented in intensity of speech; the values of all the windows are now averaged out to compute a value that represents the audio signature of the recording; this is the value that the algorithm uses to determine agitation; voice recognition to identify emotions that are linked to agitation, abstract, page 1, column 1, para 1, page 4, column 1, para 3, page 1, column 2, para 3); determining, based at least in part on the one or more biomarkers, an audio signature associated with the sound (e.g. identify voice features by literature survey and validate them for human agitation detection; a value assigned to each window which represents the average frequencies that are most represented in intensity of speech; the values of all the windows are now averaged out to compute a value that represents the audio signature of the recording; this is the value that the algorithm uses to determine agitation; voice recognition to identify emotions that are linked to agitation; abstract, page 1, column 1, para 3, page 1, column 2, para 3); and determining a similarity between the audio signature associated with the pathological state and the audio signature associated with the sound (e.g. 
identify voice features by literature survey and validate them for human agitation detection; a value assigned to each window which represents the average frequencies that are most represented in intensity of speech; the values of all the windows are now averaged out to compute a value that represents the audio signature of the recording; this is the value that the algorithm uses to determine agitation; voice recognition to identify emotions that are linked to agitation; abstract, page 1, column 1, para 3, page 1, column 2, para 3), wherein determining that the sound is indicative of the animal having the pathological state is based at least in part on the similarity (e.g. identify voice features by literature survey and validate them for human agitation detection; a value assigned to each window which represents the average frequencies that are most represented in intensity of speech; the values of all the windows are now averaged out to compute a value that represents the audio signature of the recording; this is the value that the algorithm uses to determine agitation; voice recognition to identify emotions that are linked to agitation, abstract, page 1, column 1, para 3, page 1, column 2, para 3). 
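The windowed audio-signature computation the rejection attributes to Seteanu (one dominant-frequency value per window, averaged into a single signature, then compared for similarity) can be sketched roughly as below; the window length, FFT-based dominant-frequency estimate, and similarity metric are assumptions, not details taken from the paper.

```python
import numpy as np

def audio_signature(samples: np.ndarray, sample_rate: int, window_s: float = 0.5) -> float:
    """Average, over fixed-length windows, of each window's dominant frequency (Hz)."""
    win = int(window_s * sample_rate)
    values = []
    for start in range(0, len(samples) - win + 1, win):
        spectrum = np.abs(np.fft.rfft(samples[start:start + win]))
        freqs = np.fft.rfftfreq(win, d=1 / sample_rate)
        values.append(freqs[np.argmax(spectrum)])  # most intense frequency in this window
    return float(np.mean(values))

def signature_similarity(sig_a: float, sig_b: float) -> float:
    """Similarity in (0, 1]; 1.0 means identical signature values (illustrative metric)."""
    return 1.0 / (1.0 + abs(sig_a - sig_b))

sr = 8_000
t = np.arange(0, 2.0, 1 / sr)
reference = audio_signature(np.sin(2 * np.pi * 440 * t), sr)  # stored "pathological" signature
observed = audio_signature(np.sin(2 * np.pi * 445 * t), sr)   # newly captured sound
print(signature_similarity(reference, observed))  # close signatures -> relatively high value
```

A similarity threshold on this value would then drive the claim 4 determination that the sound is indicative of the pathological state.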
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the acts further comprising: determining an audio signature associated with the pathological state; determining, based at least in part on the one or more biomarkers, an audio signature associated with the sound; and determining a similarity between the audio signature associated with the pathological state and the audio signature associated with the sound, wherein determining that the sound is indicative of the animal having the pathological state is based at least in part on the similarity as taught by Seteanu into the system of Chang in view of Raitio for the purpose of providing and identifying voice features by literature survey and validating them for human agitation detection. Claim(s) 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2011/0092779 to Chang et al. (Chang) in view of US 2017/0358301 to Raitio et al. (Raitio) and US 2020/02212223 to Zhou (Zhou) (cited by applicant). In reference to at least claim 16 Chang discloses a device (e.g. device; abstract) comprising: a first microphone (e.g. a microphone; para [0012], [0033]); one or more output components (e.g. send outputs to health monitoring device; para [0034]); one or more processors (e.g. processors; para); and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors (e.g. the device includes a processor, a memory in communication with the processor, a remote health monitor logic on the memory, a health profile database in the memory, a wireless transceiver in communication with the processor; para [0012]), cause the device to perform acts (a processor of health monitoring device; para [0044]) comprising: receiving, from the first microphone, first audio data associated with a sound captured in an environment (e.g. 
receipt of the audio tone by the microphone; microphone 102 provides an input for speech or sounds from the user; para [0012], [0033]), determining, based at least in part on the audio data, that the sound is associated with an animal (e.g. “The present invention provides devices and methods for remotely monitoring the health of an individual. The individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual. These characteristics may include voice level and tone, movements, blood pressure, temperature, etc.”, Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]); determining, based at least in part on the sound being associated with the animal, one or more characteristics associated with the sound (e.g. the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; Figs. 1, 2, 3A, 3B, 4A-4C, abstract, para [0022]). Chang further discloses that users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech (e.g. para. [0039]). Chang fails to disclose determining, based at least in part on the one or more characteristics, that the sound is indicative of a pathological state, and causing, based at least in part on the one or more characteristics being indicative of the pathological state, output of a notification via the one or more output components. Raitio discloses determining a threshold associated with having a behavioral or pathological state such as a whispered speech or a non-whispered speech (e.g. 
speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, [0244]), determining, based at least in part on the one or more biomarkers satisfying the threshold, that the sound is indicative of the animal having the behavioral or pathological state (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g. below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, [0244]), causing, based at least in part on the one or more biomarkers satisfying the threshold (e.g. speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage, abstract, para. [0244]), output of a first notification on the output component, and sending data associated with a second notification to be output on an electronic device associated with a caregiver of the animal (e.g. 
speech is received from a user, and based on the speech input, determined that a whispered speech response is to be provided; output can be provided as voice, sound, alerts, text messages, vibrations, and/or combinations of two or more of the above; in a pre-determined frequency range (e.g., below 800 Hz), an absolute value of the slope of spectrum 832 of a non-whispered speech input can be greater than the absolute value of the slope of spectrum 834 of a whispered speech input by a threshold slope percentage; abstract, para [0077], [0244]). Users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Chang to include determining spectrum characteristics of speech that include a threshold associated with the user having a behavioral or pathological state such as a threshold associated with a whispered speech and determining, based at least in part on the one or more biomarkers satisfying the threshold such as the presence of whispered speech, that the sound is indicative of the animal having the behavioral or pathological state such as sore throats, coughing, and/or other illnesses or diseases, and causing output of a first notification such as an alert or message on the output component, and sending data associated with a second notification such as an alert or message to be output on an electronic device associated with a caregiver as taught by Raitio into the system of Chang for the purpose of identifying a whispered speech input having one or more spectrum characteristics that are different from the corresponding spectrum characteristics of a non-whispered speech input providing a whispered speech determination module which improves the accuracy of recognizing subsequent user speech inputs (‘301, para. [0245]). 
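The slope comparison the rejection draws from Raitio (below roughly 800 Hz, a non-whispered input's spectral slope exceeds a whispered input's in absolute value by a threshold percentage) can be illustrated with a minimal sketch. Only the 800 Hz band edge and the threshold-percentage comparison come from the passage quoted above; the line-fit slope estimate, the 20% default, and the noise-based stand-in signals are assumptions.

```python
import numpy as np

def low_band_slope(samples: np.ndarray, sample_rate: int, band_hz: float = 800.0) -> float:
    """Slope (dB per Hz) of a line fit to the magnitude spectrum below band_hz."""
    spectrum = np.abs(np.fft.rfft(samples)) + 1e-12  # avoid log(0)
    freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
    mask = (freqs > 0) & (freqs <= band_hz)
    slope, _ = np.polyfit(freqs[mask], 20 * np.log10(spectrum[mask]), 1)
    return float(slope)

def looks_non_whispered(input_slope: float, whisper_slope: float,
                        threshold_pct: float = 20.0) -> bool:
    """True when |input slope| exceeds |whispered-speech slope| by the threshold %."""
    return abs(input_slope) > abs(whisper_slope) * (1 + threshold_pct / 100)

sr = 16_000
rng = np.random.default_rng(0)
flat = rng.normal(size=sr // 2)              # noise-like stand-in for whispered speech
steep = np.cumsum(rng.normal(size=sr // 2))  # 1/f-like stand-in for voiced speech
print(looks_non_whispered(low_band_slope(steep, sr), low_band_slope(flat, sr)))  # True
```

The stand-in signals exaggerate the effect: whispered (noise-excited) speech has a nearly flat low band, while voiced speech concentrates energy at low frequencies, giving a steeper fitted slope.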
Further, Chang and Raitio fail to disclose a second microphone; receiving, from the second microphone, second audio data associated with the sound. Zhou discloses a second microphone (e.g. at least two microphones; abstract, para [0075]); receiving, from the second microphone, second audio data associated with the sound (e.g. at least two microphones configured to collect audio signals, abstract, para [0075]) and a microcontroller configured to process the audio signals collected by the at least two microphones to generate one data stream; the audio signal from the first and second microphones (i.e., a microphone array) may be considered as from a desired sound source (e.g., voice signal from human speaker(s); abstract, para [0075]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a second microphone; receiving, from the second microphone, second audio data associated with the sound as taught by Zhou into the system of Chang and Raitio for the purpose of providing a technical solution of using the dual microphone array, if the approximate orientation of the target sound source can be determined in advance, in order to improve the performance of the multi-input audio processing scheme. In reference to at least claim 17 Chang modified by Raitio and Zhou renders obvious a device according to claim 16. Chang further discloses sending data associated with the one or more characteristics to an electronic device (e.g. the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone, movements, blood pressure, temperature, the device allows individuals to constantly monitor their health, abstract, para [0022]). In reference to at least claim 18 Chang modified by Raitio and Zhou renders obvious a device according to claim 16. 
Chang further discloses one or more sensors that include at least one of an accelerometer, a gyroscope, an inertial measurement unit (IMU), or a piezoelectric sensor (e.g. accelerometer 313 is also used to detect (sensor) motions of health monitoring device 300; para [0041]), the acts further comprising receiving, from the one or more sensors, sensor data (e.g. sensors; para [0040]), wherein: determining that the sound is associated with the animal is based at least in part on the sensor data (e.g. the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; Figs. 1, 2, 3A, 3B, 4A-4C, abstract, para [0022]); and determining the one or more characteristics associated with the sound is based at least in part on the sensor data (e.g. the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; Figs. 1, 2, 3A, 3B, 4A-4C, abstract, para [0022]). In reference to at least claim 19 Chang modified by Raitio and Zhou renders obvious a device according to claim 16. Chang further discloses the acts further comprising determining one or more characteristics associated with an environment of the animal (e.g. the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; the device allows individuals to constantly monitor their health; an individual uses a health monitoring device, in the form of a wearable wireless voice remote, to interact with a 'computerized' healthcare service provider; Figs. 1, 2, 3A, 3B, 4A-4C, 7, abstract, para [0022]), and wherein determining that the sound is indicative of the pathological state is based at least in part on the one or more characteristics associated with the environment (e.g. 
the individual wears a health monitoring device, with an attached strap, capable of sensing characteristics of the individual; these characteristics may include voice level and tone; users complaining of sore throats, coughing, and/or other illnesses or diseases often have discernable differences in their speech; determining these differences may assist in the diagnosis of these users; abstract, para [0022], [0039]). In reference to at least claim 20 Chang modified by Raitio and Zhou renders obvious a device according to claim 16. Chang fails to disclose wherein determining the one or more characteristics is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify hypophonia, the first audio data and the second audio data; and receiving, as an output from the ML model, an indication associated with the one or more characteristics. Raitio discloses wherein determining the one or more characteristics (e.g. determination whether the speech input includes a whispered speech; para [0248]) is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify a pathological state (e.g. determination whether the speech input includes a whispered speech; the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]); and receiving, as an output from the ML model, an indication associated with the one or more biomarkers (e.g. determination whether the speech input includes a whispered speech; the adjustment may be performed automatically and dynamically using, for example, machine learning technique; para [0248]). 
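Neither reference implements the provide-input/receive-output pattern claims 14 and 20 recite; the rejection maps it only to Raitio's general mention of machine learning. As a generic sketch of that pattern, a toy logistic-regression classifier over two assumed biomarkers (amplitude and pitch) with synthetic training data might look like this; every name, number, and the "hypophonia-like" labeling are assumptions, not details from the references or the application.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training rows of [amplitude, pitch-in-Hz] biomarkers; label 1 = pathological.
X = np.vstack([rng.normal([0.20, 180], [0.05, 20], (50, 2)),   # healthy-like vocalizations
               rng.normal([0.05, 120], [0.02, 15], (50, 2))])  # quiet, low: hypophonia-like
y = np.array([0] * 50 + [1] * 50)

def train_logistic(X, y, lr=0.1, steps=2000):
    """Tiny logistic-regression 'ML model' fit by batch gradient descent."""
    mu, sd = X.mean(0), X.std(0)
    Xb = np.hstack([(X - mu) / sd, np.ones((len(X), 1))])  # standardize + bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w, mu, sd

def predict(model, biomarkers) -> bool:
    """Provide biomarkers as input; receive a pathological-state indication as output."""
    w, mu, sd = model
    xb = np.append((np.asarray(biomarkers) - mu) / sd, 1.0)
    return bool(1 / (1 + np.exp(-xb @ w)) > 0.5)

model = train_logistic(X, y)
print(predict(model, [0.04, 115]))  # quiet, low-pitched input -> True
print(predict(model, [0.22, 185]))  # -> False
```

The claim language maps onto `predict`: biomarkers are provided as the input, and the boolean indication is received as the model's output.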
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein determining the one or more biomarkers is based at least in part on: providing, as an input to a machine-learned (ML) model trained to identify a pathological state, the first audio data and the second audio data; and receiving, as an output from the ML model, an indication associated with the one or more biomarkers as taught by Raitio into the system of Chang and Zhou for the purpose of providing a whispered speech determination module which improves the accuracy of recognizing the subsequent user speech inputs and adjustments may increase the accuracy of determination whether the speech input includes a whispered speech input (‘301, para. [0248]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2008/0306367 to Koehler et al., which discloses using bioacoustics sensors to detect lung sounds for early detection of diseases. US 2022/0125021 to Herborn et al., which discloses livestock rearing that can monitor distress using sounds.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER L GHAND whose telephone number is (571)270-5844. The examiner can normally be reached Mon-Fri 7:30AM - 3:30PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER MCDONALD, can be reached at (571)270-3061. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER L GHAND/
Examiner, Art Unit 3796

Prosecution Timeline

Jun 26, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599768
ADVANCED ELECTRODE DATA ANALYSIS
2y 5m to grant Granted Apr 14, 2026
Patent 12594429
STIMULATION PROGRAMMING AND CONTROL BASED ON PATIENT AMBULATORY VELOCITY
2y 5m to grant Granted Apr 07, 2026
Patent 12564710
SYSTEM FOR SECURING A RELEASABLE CONNECTION BETWEEN TWO ELEMENTS
2y 5m to grant Granted Mar 03, 2026
Patent 12539429
AUTONOMOUS IMPLANTABLE MEDICAL DEVICE TUNING
2y 5m to grant Granted Feb 03, 2026
Patent 12533515
COCHLEAR STIMULATION SYSTEM WITH SURROUND SOUND AND NOISE CANCELLATION
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
89%
With Interview (+28.8%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 667 resolved cases by this examiner. Grant probability derived from career allow rate.
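The note above says the grant probability is derived from the examiner's career allow rate. A minimal sketch of that arithmetic, assuming the displayed figure is simply granted/resolved (404 of 667 from the examiner profile) and the interview figure adds the stated +28.8% lift; these formulas are an illustration, not the tool's actual methodology:

```python
# Hypothetical reconstruction of the Prosecution Projections figures
# from the examiner statistics shown on this page.

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

base = grant_probability(404, 667)   # 404 granted of 667 resolved cases
with_interview = base + 0.288        # stated +28.8% interview lift

print(f"Grant probability: {base:.0%}")            # -> 61%
print(f"With interview:    {with_interview:.0%}")  # -> 89%
```

Both rounded values match the dashboard (61% and 89%), which is consistent with the simple additive-lift assumption.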
