Prosecution Insights
Last updated: April 19, 2026
Application No. 18/286,882

METHOD FOR ESTIMATING AN AUDIOGRAM FOR A SPECIFIC USER

Non-Final OA: §101, §102, §103, §112
Filed: Oct 13, 2023
Examiner: PARK, EVELYN GRACE
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Widex A/S
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 56% (45 granted / 80 resolved; -13.7% vs TC avg)
Interview Lift: +46.9% (strong)
Avg Prosecution: 3y 11m (33 currently pending)
Total Applications: 113 (across all art units)

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§102: 31.7% (-8.3% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 80 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 13, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Drawings

The drawings are objected to because the reference characters in Figs. 1-3 are difficult to read. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 14 is objected to because of the following informalities: “correspond-ing” should read “corresponding” in line 10; “be-tween” should read “between” in line 11; “hear-ing” should read “hearing” in line 12; and “fac-tors” should read “factors” in line 15.
Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-15 are directed to a system and non-transitory computer readable medium for determining patient-specific audiograms using a computational algorithm, which is an abstract idea. Claims 1-15 do not include additional elements that integrate the exception into a practical application or that are sufficient to amount to significantly more than the judicial exception for the reasons provided below, which are in line with the 2014 Interim Guidance on Patent Subject Matter Eligibility (Federal Register, Vol. 79, No. 241, p. 74618, December 16, 2014), the July 2015 Update on Subject Matter Eligibility (Federal Register, Vol. 80, No. 146, p. 45429, July 30, 2015), the May 2016 Subject Matter Eligibility Update (Federal Register, Vol. 81, No. 88, p. 27381, May 6, 2016), and the 2019 Revised Patent Subject Matter Eligibility Guidance (Federal Register, Vol. 84, No. 4, p. 50, January 7, 2019).

The analysis of claim 1 is as follows:

Step 1: Claim 1 is drawn to a machine.

Step 2A – Prong One: Claim 1 recites an abstract idea.
In particular, claim 1 recites the following limitations:

[A1] - receiving a first plurality of first audiograms from a corresponding first plurality of persons;
[B1] - providing, for each of the first audiograms, a weighting factor and an initial value of said weighting factor;
[C1] - selecting a first frequency, out of a first set of frequencies;
[D1] - performing a first update on each of said weighting factor corresponding to said first plurality of audiograms in dependence on the difference between the value of the respective first audiogram and said measured hearing threshold of said specific user at said first frequency; and
[E1] - determining an estimated audiogram for said specific user as a weighted mean of said first plurality of first audiograms based on said weighting factor, according to their respective first update.

These elements [A1]-[E1] of claim 1 are drawn to an abstract idea since (1) they involve mathematical concepts in the form of mathematical relationships, mathematical formulas or equations, and/or mathematical calculations; and (2) they involve a mental process that can be practically performed in the human mind, including observation, evaluation, judgment, and opinion, and using pen and paper.

Step 2A – Prong Two: Claim 1 recites the following limitations that are beyond the judicial exception:

[A2] – “a hearing estimation system comprising a computerized device and an acoustic output transducer, wherein the computerized device comprises or is operationally connected to the acoustic output transducer and wherein the computerized device comprises a graphical user interface, a program storage for storing an executable program and a processor for executing said program to perform a method”; and
[B2] - measuring a hearing threshold of a specific user at said first frequency using said acoustic output transducer.

These elements [A2]-[B2] of claim 1 do not integrate the exception into a practical application of the exception.
In particular, the elements [A2]-[B2] are merely adding insignificant extra-solution activity to the judicial exception, i.e., mere data gathering at a higher level of generality - see MPEP 2106.04(d) and MPEP 2106.05(g). Furthermore, the element [A2] is merely an instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.04(d) and MPEP 2106.05(f).

Step 2B: Claim 1 does not recite additional elements that amount to significantly more than the judicial exception itself. In particular, the recitation “a hearing estimation system comprising a computerized device and an acoustic output transducer” is merely insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with the abstract idea that uses conventional, routine, and well-known elements or simply displaying the results of the algorithm that uses conventional, routine, and well-known elements. In particular, an acoustic output transducer is nothing more than a microphone. Such transducers are conventional as evidenced by U.S. Patent Application Publication No. US 20030073920 A1 (Smits et al.), which discloses that microphones are conventional for detecting acoustic noise [0052]. Further, the “computerized device” does not qualify as significantly more because this limitation is simply appending well-understood, routine and conventional activities previously known in the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014)) and/or a claim to an abstract idea requiring no more than being stored on a computer readable medium, which is a well-understood, routine and conventional activity previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014); SAP Am. v. InvestPic, 890 F.3d 1016 (Fed. Cir. 2018)).

Claims 2-13 depend from claim 1, and recite the same abstract idea as claim 1. Furthermore, these claims only contain recitations that further limit the abstract idea (that is, the claims only recite limitations that further limit the algorithm), with the following exception:

Claim 13: “hearing aid”. This claim limitation does not integrate the exception into a practical application. In particular, the element of claim 13 is merely adding insignificant extra-solution activity to the judicial exception, i.e., mere data gathering at a higher level of generality - see MPEP 2106.04(d) and MPEP 2106.05(g). Also, this limitation does not recite additional elements that amount to significantly more than the judicial exception itself because it is merely insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with the abstract idea that uses conventional, routine, and well-known elements or simply displaying the results of the algorithm that uses conventional, routine, and well-known elements. In particular, the hearing aid is nothing more than an audio device that is present or not present at the patients’ ears. Such devices are conventional as evidenced by U.S. Patent Application Publication No. US 20150327797 A1 (Schmitt et al.), which describes that hearing aids are conventional means for aiding in a patient’s hearing loss [0001].
Also, this limitation from claim 13 is simply appending well-understood, routine and conventional activities previously known in the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions (that is, one of display) that are well-understood, routine and conventional activities previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int'l, 110 USPQ2d 1976 (2014); SAP Am. v. InvestPic, 890 F.3d 1016 (Fed. Cir. 2018)).

In view of the above, the additional elements individually do not integrate the exception into a practical application and do not amount to significantly more than the above judicial exception (the abstract idea). Looking at the limitations of each claim as an ordered combination in conjunction with the claims from which they depend (that is, as a whole) adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, for example, or improves any other technology. There is no indication that the combination of elements permits automation of specific tasks that previously could not be automated. There is no indication that the combination of elements includes a particular solution to a computer-based problem or a particular way to achieve a desired computer-based outcome. Rather, the collective functions of the claimed invention merely provide conventional computer implementation, i.e., the computer is simply a tool to perform the process.

The analysis of claim 14 is as follows:

Step 1: Claim 14 is drawn to a machine.

Step 2A – Prong One: Claim 14 recites an abstract idea.
In particular, claim 14 recites the following limitations:

[A1] - receiving a first plurality of first audiograms from a corresponding first plurality of persons;
[B1] - providing, for each of the first audiograms, a weighting factor and an initial value of said weighting factor;
[C1] - selecting a first frequency, out of a first set of frequencies;
[D1] - performing a first update on each of said weighting factor corresponding to said first plurality of audiograms in dependence on the difference between the value of the respective first audiogram and said measured hearing threshold of said specific user at said first frequency; and
[E1] - determining an estimated audiogram for said specific user as a weighted mean of said first plurality of first audiograms based on said weighting factor, according to their respective first update.

These elements [A1]-[E1] of claim 14 are drawn to an abstract idea since (1) they involve mathematical concepts in the form of mathematical relationships, mathematical formulas or equations, and/or mathematical calculations; and (2) they involve a mental process that can be practically performed in the human mind, including observation, evaluation, judgment, and opinion, and using pen and paper.

Step 2A – Prong Two: Claim 14 recites the following limitations that are beyond the judicial exception:

[A2] – a non-transitory computer readable medium carrying instructions which, when executed by a computer, cause the following method steps to be performed; and
[B2] - using an electro-acoustical transducer at least operationally connected to the computer to measure a hearing threshold of a specific user at said first frequency.

These elements [A2]-[B2] of claim 14 do not integrate the exception into a practical application of the exception.
In particular, the element [B2] is merely adding insignificant extra-solution activity to the judicial exception, i.e., mere data gathering at a higher level of generality - see MPEP 2106.04(d) and MPEP 2106.05(g). Furthermore, the element [A2] is merely an instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.04(d) and MPEP 2106.05(f).

Step 2B: Claim 14 does not recite additional elements that amount to significantly more than the judicial exception itself. In particular, the recitation “a hearing estimation system comprising a computerized device and an acoustic output transducer” is merely insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with the abstract idea that uses conventional, routine, and well-known elements or simply displaying the results of the algorithm that uses conventional, routine, and well-known elements. In particular, an acoustic output transducer is nothing more than a microphone. Such transducers are conventional as evidenced by Smits, as described above in the rejection of claim 1. Further, the element [A2] does not qualify as significantly more because this limitation is simply appending well-understood, routine and conventional activities previously known in the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014)) and/or a claim to an abstract idea requiring no more than being stored on a computer readable medium, which is a well-understood, routine and conventional activity previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014); SAP Am. v. InvestPic, 890 F.3d 1016 (Fed. Cir. 2018)).

Claim 15 depends from claim 14, and recites the same abstract idea as claim 14. Furthermore, this claim only contains recitations that further limit the abstract idea (that is, the claim only recites limitations that further limit the algorithm).

In view of the above, the additional elements individually do not integrate the exception into a practical application and do not amount to significantly more than the above judicial exception (the abstract idea). Looking at the limitations of each claim as an ordered combination in conjunction with the claims from which they depend (that is, as a whole) adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, for example, or improves any other technology. There is no indication that the combination of elements permits automation of specific tasks that previously could not be automated. There is no indication that the combination of elements includes a particular solution to a computer-based problem or a particular way to achieve a desired computer-based outcome. Rather, the collective functions of the claimed invention merely provide conventional computer implementation, i.e., the computer is simply a tool to perform the process.

Section 33(a) of the America Invents Act reads as follows:

Notwithstanding any other provision of law, no patent may issue on a claim directed to or encompassing a human organism.

Claim 13 is rejected under 35 U.S.C. 101 and section 33(a) of the America Invents Act as being directed to or encompassing a human organism. See also Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (indicating that human organisms are excluded from the scope of patentable subject matter under 35 U.S.C. 101).
Claim 13 recites “a hearing aid at the respective ear of the corresponding person” in line 3, which encompasses a human organism as part of the invention. In order to overcome this rejection, the claim could be amended to recite “a hearing aid configured to be worn at the respective ear of the corresponding person”. Similarly, claim 13 recites “a hearing aid at the respective ear of the corresponding person” in lines 5-6, which encompasses a human organism as part of the invention. In order to overcome this rejection, the claim could be amended to recite “a hearing aid configured to be worn at the respective ear of the corresponding person”.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites “said first plurality of audiograms” in lines 13-14. It is unclear if there is a difference between “first plurality of audiograms” and “first plurality of first audiograms”. Each prior recitation of “audiograms” in claim 1 is preceded by “first”, so it is unclear if “audiograms” in line 14 is meant to be “first audiograms”, or if these are different audiograms. Claims 2-13 are rejected based on their dependence on claim 1.
Claim 1 recites the limitation "the difference" in line 14. There is insufficient antecedent basis for this limitation in the claim.

Claim 1 recites “the value of the respective first audiogram” in lines 14-15. It is unclear if this is meant to be the same as the “initial value” in line 8 of claim 1, or if this is a different value.

Claim 2 recites “the frequency” in line 6. It is unclear if this is meant to be the same as the “first frequency” as recited in claim 1 and line 5 of claim 2, or if this is a different frequency.

Claim 6 recites the limitations “the specific person” and "the person" in line 4. There is insufficient antecedent basis for these limitations in the claim.

Claim 9 recites the limitation "the interval" in line 42. There is insufficient antecedent basis for this limitation in the claim.

Claim 10 recites “a value of a first audiogram” in line 2. It is unclear if this is meant to be the same as the “value” described in claim 1, or if this is a different value.

Claim 10 recites “a continuous fit” in line 3. It is unclear if this is meant to be the same as the “continuous fit” described in claim 9, upon which claim 10 depends, or if this is a different fit.

Claim 12 recites the limitation "the same persons" in line 2. There is insufficient antecedent basis for this limitation in the claim.

Claim 12 recites the limitation "the two ears" in lines 4-5. There is insufficient antecedent basis for this limitation in the claim.

Claim 14 recites “said first plurality of audiograms” in lines 10-11. It is unclear if there is a difference between “first plurality of audiograms” and “first plurality of first audiograms”. Each prior recitation of “audiograms” in claim 14 is preceded by “first”, so it is unclear if “audiograms” in line 11 is meant to be “first audiograms”, or if these are different audiograms.

Claim 14 recites the limitation "the difference" in line 11. There is insufficient antecedent basis for this limitation in the claim.
Claim 14 recites “the value of the respective first audiogram” in lines 11-12. It is unclear if this is meant to be the same as the “initial value” in line 5 of claim 14, or if this is a different value.

Claim 15 recites “the frequency” in line 6. It is unclear if this is meant to be the same as the “first frequency” as recited in claim 14 and line 5 of claim 15, or if this is a different frequency.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-12 and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20100257128 A1 (De Vries et al.).
Regarding claim 1, De Vries teaches a hearing estimation system comprising a computerized device and an acoustic output transducer, wherein the computerized device comprises or is operationally connected to the acoustic output transducer ([0023-0024] “a hearing evaluation device configured to provide a stimulus relating to a hearing evaluation event, an observation registering device configured to register a response related to the hearing evaluation event”; [0134] “a system 44 comprising a sound emitting device 46 configured to emit sounds matching the desired stimuli of the hearing evaluation event.”) and wherein the computerized device comprises a graphical user interface, a program storage for storing an executable program and a processor for executing said program to perform a method ([0123] “a system implemented using a computer, comprising data storage configured for storing data regarding a representation of a hearing ability of a population, and a sound system configured to perform the stimulus of a hearing evaluation event, a mouse or keyboard configured to register the response of the hearing evaluation event and a processor configured to establish a hearing ability model based on the representation of the hearing ability of the population and a observed response. A computer screen may be used to display graphically one or more hearing ability value together with related measure of uncertainty”); comprising the steps of: a) receiving a first plurality of first audiograms from a corresponding first plurality of persons ([0028] “a hearing ability model for a person using a representation of a distribution of hearing ability for a population of individuals, includes obtaining information regarding a person's response to a stimulus of a hearing evaluation event”; [0126] “the population data base D.sub.c comprised about 100,000 measured audiograms.”; [0128] “The lines 28 and 30 are determined based on population data, i.e. 
data from a large group of individuals”; Figs. 4-6); b) providing, for each of the first audiograms, a weighting factor and an initial value of said weighting factor (Figs. 4-6 depict initial values recorded for audiograms; [0061-0062] “At this stage an updated estimate of the audiogram is available, providing combined knowledge of the most likely values x.sub.n, and an associated measure of uncertainty .lamda..sub.n.”); c) selecting a first frequency, out of a first set of frequencies ([0044] “a hearing threshold audiogram by pure tone audiometry the hearing evaluation event corresponds to a combination of a stimulus characterized by a frequency and power level, and a response, i.e. whether the stimulus is heard.”; [0064] “The set of possible frequencies and power levels defines the set of possible next stimuli”; [0128] “FIG. 4 could be part of the first image displayed when a person is being examined. The lines 28 and 30 delimit the upper and lower boundaries indicating the uncertainty of the hearing loss in dB, the y-axis, at a given frequency, the x-axis”); d) measuring a hearing threshold of a specific user at said first frequency using said acoustic output transducer ([0041] “a sequence of N tones (s.sub.1,s.sub.2,K,s.sub.N) is presented at selectable frequency and power levels and the person tested is asked after each presentation if he or she hears the stimulus”; [0127-0128] “The filled circles indicate the expected hearing thresholds at the test frequencies, here: 125, 250, 500, 1000, 2000, 4000, 8000 Hz, and connecting line the estimated thresholds at intermediate frequencies, i.e. the current values of the model … FIG. 4 could be part of the first image displayed when a person is being examined. 
The lines 28 and 30 delimit the upper and lower boundaries indicating the uncertainty of the hearing loss in dB, the y-axis, at a given frequency, the x-axis”); e) performing a first update on each of said weighting factor corresponding to said first plurality of audiograms in dependence on the difference between the value of the respective first audiogram and said measured hearing threshold of said specific user at said first frequency ([0050-0052] “The determination of a hearing threshold audiogram by pure tone audiometry will be used as an example,”; [0132] “Since the hearing thresholds at different frequencies are correlated and the method is designed to incorporate this correlation, the listening event 36 does not only have effect for the model at the precise frequency at which the event took place, but has relevance to the entire model”); and f) determining an estimated audiogram for said specific user as a weighted mean of said first plurality of first audiograms based on said weighting factor, according to their respective first update ([0070] “compute mean hearing threshold estimate”; [0101] “the i-audiogram can be updated as shown in FIG. 5. We see that the current mean hearing threshold estimated shifted a bit downwards while the uncertainty about the thresholds decreased.”; [0128] “The population data may be established from a larger pool of data, e.g. by using selection criteria that may characterize the person being tested.”; “establish a proper model for the hearing ability of that person tested”; Figs. 4-6). 
Regarding claim 2, De Vries teaches the hearing estimation system according to claim 1, wherein said method comprises a further step of: c') determining, at each of said first set of frequencies, a weighted variance metric for said first plurality of first audiograms, in dependence of said weighting factors, wherein the first frequency is selected out of said first set of frequencies by a determination in dependence on the frequency out of a first frequency range for which the weighted variance metric is largest ([0056] “In this case, the model parameters consist of the set .theta.={.pi..sub.k,.mu..sub.k,.SIGMA..sub.k:k=1,K,K}, where .pi. is a scaling factor, and .mu. and .SIGMA. correspond to mean value and covariance matrix where the subscripts are indices for the tested frequencies. Alternative probabilistic model choices, including a Gaussian process model or polynomial regression model are also possible. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.). Usually, we take a uniform or Gaussian distribution with large variance for p(.theta.).”; [0076] “In this case, the model parameters comprise the set .theta.={.pi..sub.k, .mu..sub.k, .SIGMA..sub.k:k=1,K,K}. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.), which usually, is uniform or Gaussian with large variance for p(.theta.) … Given the database of hearing threshold (population) measurements, it is possible to update our knowledge about the hearing threshold model parameters, in the first instance the model is based on the population data alone, in the following the model is based on the population data and one or more previous measurements”; [0099] “A pure-tone stimulus is a function of a chosen frequency and chosen power level. 
The set of possible frequencies and power levels defines the set of possible next stimuli. Having access to the full probability distribution p(x|D.sub.n) for the thresholds, makes it possible to select the stimulus s* from the set of all possible stimuli that provides the largest expected information gain (reduction of uncertainty).”).

Regarding claim 3, De Vries teaches the hearing estimation system according to claim 1, wherein said method comprises an iteration of the following steps: g) determining, at several frequencies out of said first set of frequencies, a weighted variance metric for said first plurality of first audiograms in dependence on said weighting factors, according to their most recent update, respectively ([0056] “In this case, the model parameters consist of the set .theta.={.pi..sub.k,.mu..sub.k,.SIGMA..sub.k:k=1,K,K}, where .pi. is a scaling factor, and .mu. and .SIGMA. correspond to mean value and covariance matrix where the subscripts are indices for the tested frequencies. Alternative probabilistic model choices, including a Gaussian process model or polynomial regression model are also possible. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.). Usually, we take a uniform or Gaussian distribution with large variance for p(.theta.).”; [0076] “In this case, the model parameters comprise the set .theta.={.pi..sub.k, .mu..sub.k, .SIGMA..sub.k:k=1,K,K}. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.), which usually, is uniform or Gaussian with large variance for p(.theta.)
… Given the database of hearing threshold (population) measurements, it is possible to update our knowledge about the hearing threshold model parameters, in the first instance the model is based on the population data alone, in the following the model is based on the population data and one or more previous measurements”); h) determining a distinguished frequency out of said several frequencies, in dependence on the frequency out of a second frequency range for which said weighted variance metric is largest ([0099] “A pure-tone stimulus is a function of a chosen frequency and chosen power level. The set of possible frequencies and power levels defines the set of possible next stimuli. Having access to the full probability distribution p(x|D.sub.n) for the thresholds, makes it possible to select the stimulus s* from the set of all possible stimuli that provides the largest expected information gain (reduction of uncertainty).”); i) measuring a hearing threshold of said specific user at said distinguished frequency ([0103] “we have available a hearing threshold estimates {circumflex over (x)}.sub.n, uncertainty measures .lamda..sub.n and the best next stimulus s*.sub.n+1.”; [0128] “The population data may be established from a larger pool of data, e.g. by using selection criteria that may characterize the person being tested.”; “establish a proper model for the hearing ability of that person tested”; Figs. 4-6); j) performing an update on each of said weighting factor corresponding to said first plurality of first audiograms also in dependence on the difference between the value of the respective first audiogram and said measured hearing threshold of said specific user at said distinguished frequency, respectively ([0103] “In a regular audiogram, hearing loss (in dB HL) is displayed on the ordinate axis versus frequency (in Hz) on the abscissa. 
In contrast, the i-audiogram displays, after the n-th stimulus-response event, the current best hearing threshold estimate {circumflex over (x)}.sub.n (32 in FIG. 4), the current uncertainty about the thresholds .lamda..sub.n (28/30 in FIG. 4, also indicated by the shaded region), and the best next stimulus s*.sub.n+1 (36 in FIG. 4). Note that the i-audiogram is updated after each response of the person tested.”); and, after finishing said iteration ([0109] “estimation updates in the next iteration of the REPEAT loop”), further comprising the step of f') determining the estimated audiogram for said specific user as a weighted mean of said first plurality of first audiograms, using said weighting factors according to their most recent update, respectively, for said weighted mean ([0109] “On the basis of this new information, the i-audiogram can be updated as shown in FIG. 5. We see that the current mean hearing threshold estimated shifted a bit downwards while the uncertainty about the thresholds decreased. Also, a new best next stimulus is indicated by the circle in FIG. 5. After a certain number of hearing evaluation events, the i-audiogram might look as shown in FIG. 6, where the threshold uncertainty has been drastically reduced on the basis of the newly obtained observations.”). Regarding claim 4, De Vries teaches the hearing estimation system according to claim 3, wherein the iteration is finished after completion of a given number of iteration runs or when the weighted variance metric for said first plurality of first audiograms or a variance for said first plurality of first audiograms falls below a given first threshold ([0062] “From the updated estimate of the audiogram uncertainty, .lamda..sub.n, a decision is established whether the uncertainty is satisfactory, in which case the audiogram is considered the final value and the test is completed, or whether a next hearing evaluation event must be carried out”; [0109]). 
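The iterative loop the examiner maps onto De Vries (claim 3 steps g) and h), plus the claim 4 stopping rule) can be sketched roughly as follows. This is a minimal illustration of the claimed steps, not De Vries's actual Bayesian machinery; all function and variable names are invented for illustration.

```python
def weighted_variance(audiograms, weights):
    """Claim 3 step g): per-frequency weighted variance across the population
    audiograms. audiograms is a list of per-person threshold lists (dB HL);
    weights is a list of non-negative weighting factors, one per person."""
    total = sum(weights)
    n_freqs = len(audiograms[0])
    means = [sum(w * a[f] for w, a in zip(weights, audiograms)) / total
             for f in range(n_freqs)]
    return [sum(w * (a[f] - means[f]) ** 2 for w, a in zip(weights, audiograms)) / total
            for f in range(n_freqs)]

def next_frequency(freqs, audiograms, weights):
    """Claim 3 step h): probe the frequency where the weighted variance
    (remaining uncertainty) is largest."""
    variances = weighted_variance(audiograms, weights)
    return freqs[variances.index(max(variances))]

def should_stop(audiograms, weights, run, max_runs, var_threshold):
    """Claim 4 stopping rule: a fixed number of iteration runs, or the
    weighted variance below a given first threshold at every frequency."""
    return run >= max_runs or max(weighted_variance(audiograms, weights)) < var_threshold
```

Probing where the weighted variance is largest parallels the role De Vries assigns to the stimulus with the "largest expected information gain (reduction of uncertainty)."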
Regarding claim 5, De Vries teaches the hearing estimation system according to claim 3, wherein, after each iteration run, -the estimated audiogram for said specific user is determined as a weighted mean of said first plurality of first audiograms, using said weighting factors according to their most recent update, respectively, for said weighted mean ([0109] “The event index n is incremented by 1 and consequently, d.sub.n.rarw.d.sub.n+1 in order to prepare for the estimation updates in the next iteration of the REPEAT loop. Assume now that the audiologist selected for s.sub.n+1 where the `+`-sign is positioned in FIG. 4. Assume that the response of the person tested is `no` (did not hear the stimulus). On the basis of this new information, the i-audiogram can be updated as shown in FIG. 5. We see that the current mean hearing threshold estimated shifted a bit downwards while the uncertainty about the thresholds decreased.”), -the estimated audiogram is visualized using said graphical user interface ([0127] “FIG. 4 schematically illustrates what could be displayed to an operator”), and -said visualized estimated audiogram is presented to a hearing care professional for decision about stopping the iteration ([0047] “The uncertainty relates to the model and provides an indication to the operator, e.g. an audiologist, how certain, or uncertain, the model is. Based on this uncertainty the operator may decide if more observations are needed or if the model is sufficient.”; [0122] “the estimated hearing ability value is displayed graphically together with a measure of uncertainty relating to the hearing ability value giving the operator an overview of the progress of the test”). 
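The re-weighting and per-run estimate recited in claim 3 step j) and the weighted mean of claim 5 / step f') might look like the sketch below. The Gaussian kernel and the sigma value are assumed, illustrative choices; the claims only require that the update depend on the difference between each population audiogram's value and the user's measured threshold.

```python
import math

def update_weights(weights, audiograms, freq_idx, measured_db, sigma=10.0):
    """Claim 3 step j): down-weight population audiograms that disagree with
    the user's measured threshold at the probed frequency. The Gaussian
    kernel and sigma=10 dB are illustrative assumptions."""
    return [w * math.exp(-0.5 * ((a[freq_idx] - measured_db) / sigma) ** 2)
            for w, a in zip(weights, audiograms)]

def estimated_audiogram(audiograms, weights):
    """Claim 5 / step f'): the estimate is the weighted mean of the
    population audiograms under the most recent weights."""
    total = sum(weights)
    n_freqs = len(audiograms[0])
    return [sum(w * a[f] for w, a in zip(weights, audiograms)) / total
            for f in range(n_freqs)]
```

Audiograms close to the measured threshold keep most of their weight, so after each run the weighted mean is pulled toward the user's actual hearing; the per-run estimate is what claim 5 would have visualized for the hearing care professional.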
Regarding claim 6, De Vries teaches the hearing estimation system according to claim 1, wherein for each weighting factor, the initial value is chosen in dependence on a demographic similarity of the specific person with the person corresponding to the first audiogram for which the weighting factor is being provided ([0113] “Effectively, this means that the individual responses in the population data are weighted according to their relevance for estimating the thresholds of the person tested.”; [0118] “An embodiment may include parameters known to correlate to hearing loss, without explicitly being related to a test of hearing loss. These parameters may include age, gender and medical status and history of the person tested, or a combination thereof. The parameters may either be used as model parameters, or for defining subsets of the population, matching the person tested better”; [0128]). Regarding claim 7, De Vries teaches the hearing estimation system according to claim 1, wherein the first plurality of first audiograms is chosen as a subset out of a multiplicity of first audiograms from a corresponding multiplicity of persons in dependence on a demographic similarity of the specific user with a person from said multiplicity corresponding to a respective first audiogram ([0045] “establishing a hearing ability model representing the hearing ability of the person tested, based on the observation of a response related to the hearing evaluation event and the representation of a population 18. Further the step 12 may include providing previously recorded data relating to the person tested, e.g. previously observed responses to hearing evaluation events, age and/or gender etc.”; [0128]). 
Regarding claim 8, De Vries teaches the hearing estimation system according to claim 1, wherein a starting sound pressure level for measuring the hearing threshold of said specific user at a certain frequency is chosen in dependence on the estimated audiogram and/or the weighted variance metric at said certain frequency, using the weighting factors according to their most recent update, respectively ([0128] “The lines 28 and 30 delimit the upper and lower boundaries indicating the uncertainty of the hearing loss in dB, the y-axis, at a given frequency, the x-axis. The lines 28 and 30 are determined based on population data, i.e. data from a large group of individuals. The population data may be established from a larger pool of data, e.g. by using selection criteria that may characterize the person being tested. Such criteria may for instance be age, gender, occupation, medical history”). Regarding claim 9, De Vries teaches the hearing estimation system according to claim 1, wherein a continuous fit is performed on the weighted variance metric for all frequencies in the interval spanned by said first set of frequencies ([0054] “The present embodiments are based on the availability of a data set of hearing abilities for a group of persons, with a certain similarity to the person tested. From this data set a representation of the hearing abilities for a population (in its statistical sense i.e. a defined group of individuals) is provided,--either by looking up values in a database comprising the dataset, or by establishing a mathematical model of the hearing ability of the population (in the following "a population model"). In the case where a mathematical model is established, this may be done by any appropriate regression method, and the mathematical population model may be either nonparametric, such as a neural network, or the model may be parametric”; [0055-0056]; Figs. 
4-6), and wherein the first frequency is determined from said continuous fit ([0056] “the model parameters consist of the set .theta.={.pi..sub.k,.mu..sub.k,.SIGMA..sub.k:k=1, . . . , K}, where .pi. is a scaling factor, and .mu. and .SIGMA. correspond to mean value and covariance matrix where the subscripts are indices for the tested frequencies”). Regarding claim 10, De Vries teaches the hearing estimation system according to claim 9, wherein in order to obtain a value of a first audiogram at the first frequency, an interpolation and/or a continuous fit is performed on the basis of said first audiogram's values at the frequencies of said first set of frequencies (Figs. 4-6; [0054-0056] “The following example will relate to a probabilistic model for hearing thresholds p(x|.theta.) where x refers to the hearing thresholds and .theta. to the model parameters”). Regarding claim 11, De Vries teaches the hearing estimation system according to claim 1, wherein said method comprises the further steps of: -providing a second plurality of second audiograms from a corresponding second plurality of persons, wherein each person out of the first plurality is also comprised in the second plurality of persons ([0116] “forming the basis of selection of sub-groups of the population, with a higher internal similarity, and thus a lower estimated uncertainty.”); -providing, for each of the second audiograms, a value of a weighting factor ([0061-0062] “At this stage an updated estimate of the audiogram is available, providing combined knowledge of the most likely values x.sub.n, and an associated measure of uncertainty .lamda..sub.n.”); -determining, at each of a second set of frequencies, a weighted variance metric for said second plurality of second audiograms, in dependence of said weighting factors, wherein the second set of frequencies is a subset of said first set of frequencies ([0044] “a hearing threshold audiogram by pure tone audiometry the hearing evaluation event corresponds to a 
combination of a stimulus characterized by a frequency and power level, and a response, i.e. whether the stimulus is heard.”; [0064] “The set of possible frequencies and power levels defines the set of possible next stimuli”; [0128] “FIG. 4 could be part of the first image displayed when a person is being examined. The lines 28 and 30 delimit the upper and lower boundaries indicating the uncertainty of the hearing loss in dB, the y-axis, at a given frequency, the x-axis”); and -determining said first frequency and/or said distinguished frequency for the first plurality of first audiograms and/or performing said first update on the weighting factors corresponding to the first audiograms also based on the weighted variance metric for said second plurality of second audiograms ([0054-0056]; [0099] “A pure-tone stimulus is a function of a chosen frequency and chosen power level. The set of possible frequencies and power levels defines the set of possible next stimuli. Having access to the full probability distribution p(x|D.sub.n) for the thresholds, makes it possible to select the stimulus s* from the set of all possible stimuli that provides the largest expected information gain (reduction of uncertainty).”). Regarding claim 12, De Vries teaches the hearing estimation system according to claim 11, wherein the first and second pluralities comprise the same persons ([0115] “A further related hearing loss ability value may be historical hearing ability values for the same person”), and wherein for each person out of said first and second pluralities, respectively, the corresponding first and second audiogram are representing the hearing at either of the two ears ([0115] “For several types of hearing losses a correlation between left and right ear hearing ability will also mean that the use of binaural information, i.e. any information relating to the hearing ability of the other ear of the person tested, will be beneficial.”). 
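For the starting sound pressure level of claim 8, one plausible reading (an illustrative assumption, not language from the claim or from De Vries) is to start the search near the current weighted-mean estimate at the probed frequency, offset by the remaining weighted uncertainty:

```python
import math

def starting_level(audiograms, weights, freq_idx, margin_sds=1.0):
    """One possible claim 8 choice: begin at the current weighted-mean
    threshold estimate at this frequency, offset upward by the remaining
    weighted standard deviation. margin_sds is an invented tuning knob."""
    total = sum(weights)
    mean = sum(w * a[freq_idx] for w, a in zip(weights, audiograms)) / total
    var = sum(w * (a[freq_idx] - mean) ** 2 for w, a in zip(weights, audiograms)) / total
    return mean + margin_sds * math.sqrt(var)
```

Starting near the estimate (rather than at a fixed level) is what makes the starting level "in dependence on the estimated audiogram and/or the weighted variance metric," as the claim recites.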
Regarding claim 14, De Vries teaches a non-transitory computer readable medium carrying instructions which, when executed by a computer ([0135] “system 44 includes a computer-readable medium having a set of stored instructions”), cause the following method steps to be performed: a) receiving a first plurality of first audiograms from a corresponding first plurality of persons ([0028] “a hearing ability model for a person using a representation of a distribution of hearing ability for a population of individuals, includes obtaining information regarding a person's response to a stimulus of a hearing evaluation event”; [0126] “the population data base D.sub.c comprised about 100,000 measured audiograms.”; [0128] “The lines 28 and 30 are determined based on population data, i.e. data from a large group of individuals”; Figs. 4-6); b) providing, for each of the first audiograms, a weighting factor and an initial value of said weighting factor (Figs. 4-6 depict initial values recorded for audiograms; [0061-0062] “At this stage an updated estimate of the audiogram is available, providing combined knowledge of the most likely values x.sub.n, and an associated measure of uncertainty .lamda..sub.n.”); c) selecting a first frequency, out of a first set of frequencies ([0044] “a hearing threshold audiogram by pure tone audiometry the hearing evaluation event corresponds to a combination of a stimulus characterized by a frequency and power level, and a response, i.e. whether the stimulus is heard.”; [0064] “The set of possible frequencies and power levels defines the set of possible next stimuli”; [0128] “FIG. 4 could be part of the first image displayed when a person is being examined. 
The lines 28 and 30 delimit the upper and lower boundaries indicating the uncertainty of the hearing loss in dB, the y-axis, at a given frequency, the x-axis”); d) using an electro-acoustical transducer at least operationally connected to the computer to measure a hearing threshold of a specific user at said first frequency ([0023-0024] “a hearing evaluation device configured to provide a stimulus relating to a hearing evaluation event, an observation registering device configured to register a response related to the hearing evaluation event”; [0134] “a system 44 comprising a sound emitting device 46 configured to emit sounds matching the desired stimuli of the hearing evaluation event.”); e) performing a first update on each of said weighting factors correspond-ing to said first plurality of audiograms in dependence on the difference be-tween the value of the respective first audiogram and said measured hear-ing threshold of said specific user at said first frequency ([0050-0052] “The determination of a hearing threshold audiogram by pure tone audiometry will be used as an example,”; [0132] “Since the hearing thresholds at different frequencies are correlated and the method is designed to incorporate this correlation, the listening event 36 does not only have effect for the model at the precise frequency at which the event took place, but has relevance to the entire model”); and f) determining an estimated audiogram for said specific user as a weighted mean of said first plurality of first audiograms based on said weighting fac-tors, according to their respective first update ([0070] “compute mean hearing threshold estimate”; [0101] “the i-audiogram can be updated as shown in FIG. 5. We see that the current mean hearing threshold estimated shifted a bit downwards while the uncertainty about the thresholds decreased.”; [0128] “The population data may be established from a larger pool of data, e.g. 
by using selection criteria that may characterize the person being tested.”; “establish a proper model for the hearing ability of that person tested”; Figs. 4-6). Regarding claim 15, De Vries teaches the non-transitory computer readable medium according to claim 14 causing the following further method step to be performed: c') determining, at each of said first set of frequencies, a weighted variance metric for said first plurality of first audiograms, in dependence of said weighting factors, wherein the first frequency is selected out of said first set of frequencies by a determination in dependence on the frequency out of a first frequency range for which the weighted variance metric is largest ([0056] “In this case, the model parameters consist of the set .theta.={.pi..sub.k,.mu..sub.k,.SIGMA..sub.k:k=1, . . . , K}, where .pi. is a scaling factor, and .mu. and .SIGMA. correspond to mean value and covariance matrix where the subscripts are indices for the tested frequencies. Alternative probabilistic model choices, including a Gaussian process model or polynomial regression model are also possible. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.). Usually, we take a uniform or Gaussian distribution with large variance for p(.theta.).”; [0076] “In this case, the model parameters comprise the set .theta.={.pi..sub.k, .mu..sub.k, .SIGMA..sub.k:k=1, . . . , K}. Prior to any experiments, our state of knowledge about proper values for the hearing threshold model parameters is represented by a distribution p(.theta.), which usually, is uniform or Gaussian with large variance for p(.theta.) 
… Given the database of hearing threshold (population) measurements, it is possible to update our knowledge about the hearing threshold model parameters, in the first instance the model is based on the population data alone, in the following the model is based on the population data and one or more previous measurements”; [0099] “A pure-tone stimulus is a function of a chosen frequency and chosen power level. The set of possible frequencies and power levels defines the set of possible next stimuli. Having access to the full probability distribution p(x|D.sub.n) for the thresholds, makes it possible to select the stimulus s* from the set of all possible stimuli that provides the largest expected information gain (reduction of uncertainty).”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over US 20100257128 A1 (De Vries et al.) in view of US 20200268260 A1 (Tran, Bao). Regarding claim 13, De Vries teaches the hearing estimation system according to claim 11, wherein each of the second audiograms is measured in the absence of a hearing aid at the respective ear of the corresponding person out of the second plurality (Figs. 4-6; [0101] “i-audiogram is updated after each response of the person tested.”; [0115]). 
De Vries does not teach wherein each of the first audiograms is measured as an in-situ audiogram in the presence of a hearing aid at the respective ear of the corresponding person out of the first plurality. However, Tran teaches wherein each of the first audiograms is measured as an in-situ audiogram in the presence of a hearing aid at the respective ear of the corresponding person out of the first plurality ([0163] “A representative audiogram is created for each set of audiograms. A hearing enhancement fitting is computed from each representative audiogram. A hearing aid device is programmed with one or more hearing enhancement fittings computed from each representative audiogram.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by De Vries to include measuring audiograms in the presence of a hearing aid. One would have been motivated to make this modification because determining the threshold of hearing in each frequency band enables adjustments to be determined to compensate for the individual’s loss of hearing, as suggested by Tran [0163]. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVELYN GRACE PARK whose telephone number is (571)272-0651. The examiner can normally be reached Monday - Friday, 9AM - 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert (Tse) Chen can be reached at (571)272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EVELYN GRACE PARK/Examiner, Art Unit 3791 /TSE W CHEN/Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Oct 13, 2023
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594006: SMARTPHONE APPLICATION WITH POP-OPEN SOUNDWAVE GUIDE FOR DIAGNOSING OTITIS MEDIA IN A TELEMEDICINE ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12588835: METHOD AND SYSTEM FOR TRACKING MOVEMENT OF A PERSON WITH WEARABLE SENSORS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12569147: FLUID RESPONSIVENESS DETECTION DEVICE AND METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564390: A BIOPSY ARRANGEMENT (granted Mar 03, 2026; 2y 5m to grant)
Patent 12557991: TEMPERATURE MEASUREMENT DEVICE AND SYSTEM FOR DETERMINING A DEEP INTERNAL TEMPERATURE OF A HUMAN BEING (granted Feb 24, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
Grant Probability With Interview: 99% (+46.9% lift)
Median Time to Grant: 3y 11m
PTA Risk: Low

Based on 80 resolved cases by this examiner. Grant probability derived from career allow rate.
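The headline grant probability follows directly from the career allow rate shown in the Examiner Intelligence panel (45 granted of 80 resolved); a quick check:

```python
# Figures taken from the Examiner Intelligence panel above
granted, resolved = 45, 80
allow_rate = granted / resolved
# 45/80 = 0.5625, which displays as the 56% grant probability shown
print(f"career allow rate: {allow_rate:.0%}")
```

The 99% with-interview figure is a separate conditional statistic over the subset of resolved cases that included an interview, so it does not follow from this ratio alone.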
