Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
1. Claims 1-6, 8-10, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Leong (US 2022/0345833 A1) in view of DeVries et al. (US 2022/0400350 A1), hereinafter DeVries.
As to Claim 1, Leong teaches a fitting agent for a hearing device system (a mobile phone 106 arranged to enable fitting of hearing aid 102, [0056], Figure 1, or an information handling system 200 which can be used as the computing device 106, [0059], Figure 2) comprising a hearing device to be worn by a hearing device user (hearing aid 102, [0002]), wherein the fitting agent (information handling system 200, [0059]) comprises one or more processors (processor 202) configured to: initialize a user model and an environment model (processor 302 including a machine learning processing module 302A that determines a sound processing profile with a set of sound processing settings for use in hearing aid 102 based on data associated with a hearing response of a user of the hearing aid 102 (e.g., different hearing responses in different frequency bands) and one or more properties of the environment in which the hearing aid 102 is located (e.g., location of the environment, sound profile or ambient noise profile of the environment, properties of the environment determined from an image (or a stream of images or a video)); see at least [0064]).
Regarding the following: the user model comprising a plurality of user preference functions and a user response distribution; obtain environment data; determine a first initial environment probability of a first environment and a second initial environment probability of a second environment based on the environment data and the environment model; obtain a test setting comprising a primary test setting and a secondary test setting for the hearing device based on the first initial environment probability and the second initial environment probability; provide the test setting for presentation to the hearing device user; obtain a user input of a preferred test setting indicative of a preference for either the primary test setting or the secondary test setting; and update the user model for provision of an updated user model based on the preferred test setting and the environment data, Leong teaches on [0103]: (1) The user may take a relatively fast and simple self-administered mobile application-based hearing test and use the test result to adjust the hearing aid's initial sound processing settings to match the user's individual hearing profile, or to calibrate the hearing aid's settings over time to take into account the user's change in hearing perception over time. (2) The hearing aid may include built-in capability to "self-learn" and continuously optimize the user's hearing experience out-of-the-box based on the user's environmental context, ambient noise profile (e.g., eating in a restaurant, watching a movie in a movie theatre, watching an outdoor concert or football match, watching TV at home, talking one-on-one in a quiet place, etc.) and preferences. (3) Information on the environmental context, including noise, images, GPS location, etc.
of the environment, and the user's input preferences (which are input through the hearing aid or the mobile phone) may be sent to the server (cloud computing server) in real time during operation for training the machine learning processing models (e.g., neural networks). The machine learning processing models may thus improve over time. (4) The hearing aid may automatically recognize the user's environment context and the machine learning processing model may automatically set the hearing aid's settings and preferences to optimize the user's hearing experience, with little or no user input required. (5) The accumulated data of a large number of users over time will be continuously used to train or optimize machine learning processing models (e.g., neural networks) to determine and set the initial settings of the hearing aid more accurately after users complete the hearing test administered through the mobile application. Leong, however, does not explicitly teach: the user model comprising a plurality of user preference functions and a user response distribution…provide the test setting for presentation to the hearing device user; obtain a user input of a preferred test setting indicative of a preference for either the primary test setting or the secondary test setting; and update the user model for provision of an updated user model based on the preferred test setting and the environment data.
However, DeVries, in the related field of hearing aid fitting, teaches a method for updating a user model and a fitting agent for a hearing device system, the hearing device system comprising a hearing device worn by a hearing device user, wherein the fitting agent comprises one or more processors configured to: initialize a user model comprising a plurality of user preference functions and associated user response distributions, wherein each user preference function is associated with an environment; obtain environment data indicative of a present environment; obtain a test setting comprising a primary test setting and a secondary test setting for the hearing device; present the test setting to the hearing device user; obtain a user input of a preferred test setting indicative of a preference for either the primary test setting or the secondary test setting; and update the user model based on hearing device parameters of the preferred test setting and the environment data. See at least the abstract. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the fitting agent such that the fitting agent has one or more processors configured to access a user model comprising multiple user preference functions and associated user response distributions, where the user preference functions are associated with respective environments, to improve the listening experience for the user by improving modeling of user-preferred hearing aid parameter settings in different environments.
As to Claim 2, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to estimate a personalized environment probability of a present environment based on the updated user model, DeVries on [0047] teaches that updating the user model may comprise updating the user response distribution(s), or at least parameters thereof, based on one or more environment probabilities, optionally including a first environment probability and a second environment probability.
As to Claim 3, Leong in view of DeVries teaches the limitations of Claim 2, and wherein the personalized environment probability is a probability of one or more user preference clusters in a domain of hearing device parameters, DeVries on [0056]-[0058] teaches that in one or more example fitting agents, the environment data/observed context is modelled using Gaussian mixture models (GMM), given that there are K possible environments/states of environments (the claimed clusters; see at least the specification of the instant application on [0065]-[0068]), and on [0074] teaches that obtaining environment data comprises determining an environment identifier and/or obtaining one or more environment probabilities, such as K environment probabilities, e.g., using a Gaussian mixture model. Updating the user model may be based on the environment identifier and/or one or more environment probabilities, such as environment probabilities for environments ENV_k, k=1, 2, . . . , K.
As to Claim 4, Leong in view of DeVries teaches the limitations of Claim 2, and wherein the one or more processors are configured to update the environment model for provision of an updated environment model based on the personalized environment probability and the environment data, DeVries on [0067] teaches that the one or more processors are configured to obtain a first environment probability of a first environment and/or a second environment probability of a second environment based on the environment data, and wherein obtaining a test setting and/or updating the user model is optionally based on the first environment probability and/or the second environment probability. In other words, one or more processors of the fitting agent may be configured to obtain, such as one or more of determine, estimate, receive, and retrieve, a first environment probability, also denoted ENVP_1, of a first environment ENV_1 and/or a second environment probability ENVP_2 of a second environment ENV_2, and optionally obtain a test setting and/or update the user model, such as the first user preference function and/or the second user preference function, based on the first environment probability and/or the second environment probability.
As to Claim 5, Leong in view of DeVries teaches the limitations of Claim 4, and wherein the one or more processors are configured to update the environment model by updating an environment distribution function based on the personalized environment probability and the environment data, DeVries teaches on [0048] and [0049] that updating the user model may be based on Bayesian inference. Updating the user model may comprise updating one or more of the parameters of the user preference function and/or user response distribution(s) and/or environmental model and/or parameter distributions associated with the user preference functions. Updating the user model may comprise determining one or more posteriors of parameters of the user preference function(s).
Per [0049], updating the user model may comprise determining a posterior of the parameters of one or more of, such as a subset of or all, the user preference functions, e.g., based on the environment data, a previous parameter posterior, such as the last or current parameter posterior, a preferred test setting, and a non-preferred test setting.
As to Claim 6, Leong in view of DeVries teaches the limitations of Claim 4, and wherein the environment model is a Gaussian Mixture Model, DeVries on [0074] teaches that obtaining environment data comprises determining an environment identifier and/or obtaining one or more environment probabilities, such as K environment probabilities, e.g., using a Gaussian mixture model.
As to Claim 8, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to obtain the environment data by obtaining position data indicative of a user position and determining the environment data based on the position data, DeVries teaches on [0065] that in one or more example fitting agents, obtaining environment data comprises obtaining context data and optionally determining the environment data based on the context data and/or including the context data in the environment data. In other words, one or more processors of the fitting agent may be configured to obtain context data and optionally determine the environment data based on the context data. Context data may be indicative of the context the user is in, such as indicative of a user's location, position, movement, temperature, pulse, or other data relevant for the environment.
As to Claim 9, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to obtain the environment data by obtaining audio data indicative of sound in a present environment and determining the environment data based on the audio data, DeVries teaches on [0064] that obtaining environment data comprises obtaining audio data and optionally determining the environment data based on the audio data and/or including the audio data in the environment data. In other words, one or more processors of the fitting agent may be configured to obtain audio data and determine the environment data, or at least one or more environment parameters, based on the audio data.
As to Claim 10, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to obtain the environment data by obtaining context data indicative of a surrounding and/or an activity of the user, and determining the environment data based on the context data, DeVries on [0065] teaches that obtaining environment data comprises obtaining context data and optionally determining the environment data based on the context data and/or including the context data in the environment data. In other words, one or more processors of the fitting agent may be configured to obtain context data and optionally determine the environment data based on the context data. Context data may be indicative of the context the user is in, such as indicative of a user's location, position, movement, temperature, pulse, or other data relevant for the environment. For example, the context data may comprise location data, e.g., GPS coordinates, and/or movement data, such as accelerometer data. The context data may comprise calendar data, and the environment data may be based on the calendar data. The context data may comprise sensor data, e.g., from one or more sensors of an accessory device and/or from one or more sensors of the hearing device. The context data may comprise hearing device data transmitted from the hearing device, such as one or more program identifiers, one or more operating parameters, and/or one or more operating mode identifiers of the hearing device.
As to Claim 13, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to obtain an environment profile, and to initialize the environment model based on the environment profile, DeVries teaches on [0046] that the fitting agent/one or more processors of the fitting agent is/are configured to update the user model, such as one or more user preference functions and/or one or more user response distributions, based on hearing device parameters of the preferred test setting and the environment data. In other words, a user response distribution P_u may model a user response in one or more environments. A user response distribution may be a weighted user response distribution for all or a subset of the K environments, e.g., where the weights are based on environment probabilities, thus teaching an environment model based on an environment profile obtained from the user's responses.
As to Claim 14, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the environment data is indicative of a present environment, DeVries teaches in the abstract to obtain environment data indicative of a present environment, and on [0029] teaches that the secondary test setting may be based on and/or dependent on the present environment. In other words, the secondary test setting may be based on and/or dependent on the environment data.
2. Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Leong (US 2022/0345833 A1) in view of DeVries et al. (US 2022/0400350 A1), hereinafter DeVries, further in view of Van Hasselt et al. (US 2018/0035216 A1), hereinafter Van Hasselt.
As to Claim 11, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the one or more processors are configured to obtain a user profile, and to initialize the environment model based on the user profile, DeVries teaches that fitting hearing aid parameters based on the user's hearing loss and audiograms is well-known ([0003]), but does not explicitly teach initializing the environment model based on the user profile. However, Van Hasselt, in the related field of hearing aids, teaches a customized enhanced sound system 10 where the system 10 of device or devices 1A/1B/8A provides the dual functions of testing to develop profiles and of sound reproduction in a particular environment. In the testing mode, the system 10 interactively measures personal hearing capabilities in one function (typically prior to use for subsequent storage) and measures environmental sound/noise in another function (typically contemporaneous with reproduction). The system 10 stores the individualized audiological profile locally or remotely. The system 10 stores the environment profile locally. Analysis of raw data to generate the individualized audiological profile may also be performed either locally or remotely (via telecommunication links). In the reproduction or playback mode, the system 10 modifies a source audio program (input audio signals) according to the individual and environment profiles to adapt the program to the hearing capabilities and preferences of the individual user. In a specific embodiment, the system 10 captures and measures, or receives captured data, analyzes the data, generates target gain for each audiometric frequency, applies the target gain and/or tinnitus relieving signals to the audio signal, and forms the enhanced audio output signals with safeguards against uncomfortable or damaging loudness. See at least the abstract and [0066]-[0067].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to obtain a user profile and initialize the environment model based on the user profile in order to provide customized enhanced sound to the user.
As to Claim 12, Leong in view of DeVries teaches the limitations of Claim 1, and wherein the user profile comprises one or more of age, gender, hearing loss degree, and activity level (DeVries teaches that fitting hearing aid parameters based on the user's hearing loss and audiograms is well-known, [0003]), but does not explicitly teach wherein the one or more processors are configured to initialize the environment model based on one or more of the age, the gender, the hearing loss degree, and the activity level. However, Van Hasselt, in the related field of hearing aids, teaches a customized enhanced sound system 10 where the system 10 of device or devices 1A/1B/8A provides the dual functions of testing to develop profiles and of sound reproduction in a particular environment. In the testing mode, the system 10 interactively measures personal hearing capabilities in one function (typically prior to use for subsequent storage) and measures environmental sound/noise in another function (typically contemporaneous with reproduction). The system 10 stores the individualized audiological profile locally or remotely. The system 10 stores the environment profile locally. Analysis of raw data to generate the individualized audiological profile may also be performed either locally or remotely (via telecommunication links). In the reproduction or playback mode, the system 10 modifies a source audio program (input audio signals) according to the individual and environment profiles to adapt the program to the hearing capabilities and preferences of the individual user. In a specific embodiment, the system 10 captures and measures, or receives captured data, analyzes the data, generates target gain for each audiometric frequency, applies the target gain and/or tinnitus relieving signals to the audio signal, and forms the enhanced audio output signals with safeguards against uncomfortable or damaging loudness. See at least the abstract and [0066]-[0067].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to initialize the environment model based on the user's profile to provide customized enhanced sound to the user.
Allowable Subject Matter
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNITA JOSHI whose telephone number is (571)270-7227. The examiner can normally be reached 8-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUNITA JOSHI/Primary Examiner, Art Unit 2691