DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending and are examined in this Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Pontoppidan et al. (US 20230037356 A1) in view of Goodall et al. (US 20190046794 A1), and further in view of Jobani (US 20150382123 A1), Wu et al. (US 20240296931 A1), and Bologna et al. (US 20200100554 A1).
Regarding claim 1, Pontoppidan teaches a method (See Pontoppidan: Figs. 7A-B, and [0201], “FIG. 7A shows a flow diagram for an embodiment of a method of determining a parameter setting for a specific hearing aid of a particular user according to the present disclosure”) comprising:
during a first time period (See Pontoppidan: Figs. 1A-B, and [0167], “In a first loop, the recommended hearing aid setting is solely based on the simulation model (using a hearing profile of the specific user and (previously) generated hearing aid input signals corresponding to a variety of acoustic environments (signal and noise levels, noise types, user preferences, etc.), cf. arrow denoted ‘1.sup.st loop’ in FIG. 1A, 1B symbolizing at least one (but typically a multitude of runs) through the functional blocks of the model (‘AI-hearing model’->‘Audiologic profile’->‘Loudness, speech’->‘Acoustic situations and user preferences’->‘AI-hearing model’ in FIG. 1A”. Note: the 1st loop is mapped to the first time period. The 1st and 2nd loops can be repeated many times, depending on the final hearing aid performance for a user, and can be applied to many different users as well. Thus, a user's first visit may also be mapped to the first time period, and subsequent visits of the same user may be mapped to the second time period):
serving a hearing assessment to a user (See Pontoppidan: Figs. 1A-B, and [0167], “The model of the physical environment comprises a simulation of the impact of the hearing profile of the user on the sound signals provided by the hearing aid (block ‘Audiologic profile’ in FIG. 1A, and ‘Simulation of user's hearing loss’ in FIG. 1B) based on hearing data of the particular user, cf. block ‘Hearing diagnostics of particular user’ in FIG. 1A, 1B)”. Note: the ‘Hearing diagnostics’ block is mapped to serving a hearing assessment to a user.);
generating a hearing profile of the user based on a result of the hearing assessment (See Pontoppidan: Fig. 2, and [0174], “The information in box 4, denoted ‘Big5 personality traits added to hearing profile for stratification’ is fed to the ‘Hearing diagnostics of particular user’ to provide a supplement to the possible more hearing loss dominated data of the user. The information in boxes 2 (2A, 2B) and 3 are all fed to the AI-hearing model, representing exemplary data of the acoustic environments encountered by the user when wearing the hearing aid, and the user's reaction to these environments”. Note: the ‘Big5 personality traits added to hearing profile for stratification’ is mapped to generating a hearing profile of the user based on a result of the hearing assessment.);
accessing a first set of image data depicting a head of the user;
extracting an ear morphology from the first set of image data;
matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology;
accessing a virtual representation of the first hearing aid type (See Pontoppidan: Fig. 2, and [0174], “The information in box 4, denoted ‘Big5 personality traits added to hearing profile for stratification’ is fed to the ‘Hearing diagnostics of particular user’ to provide a supplement to the possible more hearing loss dominated data of the user. The information in boxes 2 (2A, 2B) and 3 are all fed to the AI-hearing model, representing exemplary data of the acoustic environments encountered by the user when wearing the hearing aid, and the user's reaction to these environments”. Note: the AI-hearing model is mapped to a virtual representation of the first hearing aid type);
accessing a first fitment definition of the first hearing aid type;
generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user based on:
the first set of image data;
the virtual representation of the first hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)′. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid is mapped to the virtual representation of the first hearing aid type); and
the first fitment definition of the first hearing aid type;
rendering the first annotated head model for the user; and
during a second time period, following provision of a first hearing aid of the first hearing aid type to the user (See Pontoppidan: Figs. 1A-B, and [0171], “Based on the transferred data from the user's personal experience while wearing the hearing aid(s) a 2.sup.nd loop is executed by the simulation model where the logged data are used instead of or as a supplement to the predefined (general) data representing acoustic environments, user intent, etc. Thereby an optimized hearing aid setting is provided. The optimized hearing aid setting is transferred to the specific hearing aid and applied to the appropriate processing algorithms. Thereby an optimized (personalized) hearing aid is provided”; and [0172], “The 2.sup.nd loop can be repeated continuously or with a predefined frequency, or triggered by specific events (e.g. power-up, data logger full, consultation with HCP (e.g. initiated by HCP), initiated by the user via a user interface, etc.)”. Note: the 2nd loop is mapped to the second time period; an illustrative sketch of this two-loop flow appears after the claim mapping below.):
loading a representation of the hearing profile onto the first hearing aid (See Pontoppidan: Figs. 7A-B, and [0209], “S2. Transferring the simulation-based hearing aid setting to an actual version of said specific hearing aid”. Note: transferring the simulated setting to the actual hearing aid is mapped to loading a representation of the hearing profile onto the first hearing aid.);
generating an ear placement instruction for the first hearing aid based on:
the first set of image data; and
the virtual representation of the first hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)′. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid is mapped to the virtual representation of the first hearing aid type);
rendering the ear positioning instructions;
accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user;
detecting a first position of the first hearing aid, arranged on the ear of the user, in the second set of image data; and
in response to the first position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type:
generating an adjustment prompt to adjust placement of the first hearing aid on the ear of the user; and
presenting the adjustment prompt to the user.
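For orientation only, the two-loop flow mapped above (a first, simulation-only optimization loop, followed by a second loop re-optimized with data logged from real-world use) can be sketched in Python as follows. This is a hypothetical illustration, not code from Pontoppidan; the function names and the toy cost function are invented for the example.

```python
# Hypothetical sketch of the two-loop fitting flow mapped above (not code
# from Pontoppidan): loop 1 optimizes a hearing aid setting purely in
# simulation; loop 2 re-optimizes using scenes logged during real use.

def cost(setting, hearing_profile, scene):
    """Toy cost: distance between the setting's gain and the gain the
    profile would require in a given acoustic scene (invented metric)."""
    required = hearing_profile["required_gain_db"] + 0.1 * scene["noise_db"]
    return abs(setting["gain_db"] - required)

def optimize(settings, hearing_profile, scenes):
    """Pick the candidate setting with the lowest total simulated cost."""
    return min(settings,
               key=lambda s: sum(cost(s, hearing_profile, sc) for sc in scenes))

hearing_profile = {"required_gain_db": 35.0}             # from the hearing assessment
candidates = [{"gain_db": g} for g in range(20, 51, 5)]  # candidate settings
simulated_scenes = [{"noise_db": 40}, {"noise_db": 65}]  # predefined environments

# 1st loop (first time period): simulation only.
setting = optimize(candidates, hearing_profile, simulated_scenes)

# 2nd loop (second time period): logged real-world scenes supplement or
# replace the predefined ones, and the setting is re-optimized.
logged_scenes = [{"noise_db": 72}, {"noise_db": 55}]
setting = optimize(candidates, hearing_profile, simulated_scenes + logged_scenes)
print(setting)
```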
However, Pontoppidan fails to explicitly disclose accessing a first set of image data depicting a head of the user; extracting an ear morphology from the first set of image data; matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology; accessing a first fitment definition of the first hearing aid type; generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user based on: the first set of image data; the first fitment definition of the first hearing aid type; rendering the first annotated head model for the user; generating an ear placement instruction for the first hearing aid based on: the first set of image data; and rendering the ear positioning instructions; accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user; detecting a first position of the first hearing aid, arranged on the ear of the user, in the second set of image data; and in response to the first position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type: generating an adjustment prompt to adjust placement of the first hearing aid on the ear of the user; and presenting the adjustment prompt to the user.
However, Goodall teaches accessing a first set of image data depicting a head of the user (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured by using the user-facing imager to capture the user image are mapped to accessing a first set of image data depicting a head of the user);
extracting an ear morphology from the first set of image data;
matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology;
accessing a first fitment definition of the first hearing aid type;
generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager are analyzed to determine the position of the hearing aid (earpieces), and are mapped to the first set of image data); and
the first fitment definition of the first hearing aid type;
rendering the first annotated head model for the user;
generating an ear placement instruction for the first hearing aid (See Goodall: Figs. 48-52, and [0384], “As discussed herein above, in an aspect, image detection and analysis is used to detect improper placement of one or more earpieces on the ear(s) of a user of a computing device”; and [0391], “In another aspect, method 5200 includes delivering, under control of test signal circuitry on the computing device, an audio test signal via a sound source associated with the at least one earpiece, and determining proper placement of the at least one earpiece based upon audio feedback, as indicated at 5220. In an aspect, audio feedback is determined from an audio signal detected from the earpiece, which will vary depending upon the placement of the earpiece, e.g. whether or not it is firmly seated within the ear canal. In an aspect, audio feedback is determined from the user, e.g. the user self-reporting of audio quality”. Note that image analysis is used to determine the placement of the earpieces (earpieces are mapped to the hearing aids) on the ears, which is mapped to generating an ear placement instruction for the first hearing aid) based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager are analyzed to determine the position of the hearing aid (earpieces), and are mapped to the first set of image data); and
rendering the ear positioning instructions (See Goodall: Fig. 13, and [0272], “In an aspect, secondary signal input 1360 is adapted to receive a position signal indicative of a position of the external neural stimulator with respect to the pinna of the subject. In connection therewith, system 1300 may also include notification circuitry 1406 for delivering a notification to the subject indicating that the external neural stimulator should be repositioned. In an aspect, notification circuitry 1406 includes circuitry for delivering the notification via a graphical display 1368 of computing device 1302”. Note that the position and reposition signals are mapped to rendering the ear positioning instructions);
accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user (See Goodall: Figs. 45-46, and [0364], “This may include capturing the image of the user of the computing device responsive to receiving the handshake signal from the ear stimulation device control circuitry, as indicated at 4506. In an aspect, method 4500 includes sending a handshake signal to the ear stimulation device control circuitry responsive to determining the presence of the at least one earpiece located at the ear of the user in the image, as indicated at 4508”. Note that the images captured by using the user-facing imager after receiving the handshake signal are mapped to accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user);
detecting a first position of the first hearing aid, arranged on the ear of the user, in the second set of image data (See Goodall: Figs. 44-46, and [0362], “FIGS. 44-46 depict further aspects of the method of FIG. 43, wherein steps 4302, 4304, and 4306 are as depicted and described in connection with FIG. 43. As depicted in FIG. 44, in further aspects of method 4400, the at least one parameter is indicative of at least one emotion of the user 4402, is indicative of a physiological condition of the user 4404, is indicative of a medical condition of the user 4406, is indicative of an identity of the user 4408, is a heart rate of the user 4410, is related to eye position of the user 4412, is related to eye movement of the user 4414 of the user, or is indicative of a position of the earpiece with respect to the ear of the user 4416”. Note that the position of the earpiece with respect to the ear of the user is mapped to detecting a first position of the first hearing aid, arranged on the ear of the user, in the second set of image data); and
in response to the first position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type (See Goodall: Fig. 49, and [0384], “As discussed herein above, in an aspect, image detection and analysis is used to detect improper placement of one or more earpieces on the ear(s) of a user of a computing device. In some aspects, it is desirable to detect quality of electrical contact between the ear and an electrode used for delivering electrical stimuli to or sensing electrical signals from the ear”. Note that using image detection and analysis to detect improper placement of one or more earpieces on the ear(s) of a user is mapped to in response to the first position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type):
generating an adjustment prompt to adjust placement of the first hearing aid on the ear of the user (See Goodall: Figs. 43-48, and [0363], “In an aspect, method 4400 further includes delivering, under control of notification circuitry on the computing device, a notification to the user informing the user of the need to adjust a position of an earpiece of the ear stimulation device with respect to the ear of the user, as indicated at 4418. In various aspects, delivering a notification includes delivering a text notification 4420, delivering a visible notification 4422, or delivering an audio notification 4424. The notification can be specific (e.g., a text or audio notification instructing the user to “push the earpiece further into the ear canal” or “move the earpiece higher up on the pinna”) or non-specific (e.g., a flashing light or beeping sound that indicates the need to reposition the earpiece without providing detail on how specifically it should be repositioned)”. Note that the specific text notification to the user to adjust the earpiece is mapped to generating an adjustment prompt to adjust placement of the first hearing aid on the ear of the user); and
presenting the adjustment prompt to the user (See Goodall: Figs. 48A-B, and [0381], “In the example of FIG. 48A, the computing device is a smart phone 4800 configured with application software that notifies the user of improper placement of the earpieces. Detection and notification is performed, e.g. as described in connection with FIGS. 43-47. Delivery of text, visible, and audio notifications to the user (e.g., as in the method of FIG. 46) are illustrated in FIG. 48A”. Note that the delivery of text, visible, and audio notifications to the user is mapped to presenting the adjustment prompt to the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to include accessing a first set of image data depicting a head of the user; the first set of image data; generating an ear placement instruction for the first hearing aid based on: the first set of image data; and rendering the ear positioning instructions; accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user; detecting a first position of the first hearing aid, arranged on the ear of the user, in the second set of image data; and in response to the first position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type: generating an adjustment prompt to adjust placement of the first hearing aid on the ear of the user; and presenting the adjustment prompt to the user, as taught by Goodall, in order to allow the wearable neural stimulation device to be worn in a certain ear, be worn in a certain orientation relative to the ear, or be easily removed from the ear (See Goodall: Fig. 2, and [0168], “In an example, the wearable neural stimulation device 202 can exhibit a shape and fit that attaches the wearable neural stimulation device 202 to the ear 204 of the subject 206 which can allow the neural stimulator 212 and, optionally neural stimulator 222, to contact the ear 204. In an example, the wearable neural stimulation device 202 can operate while the subject 206 is moving. In an example, the wearable neural stimulation device 202 can exhibit a shape, size, or attachment mechanism that allows the wearable neural stimulation device 202 to at least one of be worn only in a certain ear 204, be worn in a certain orientation relative to the ear 204, or allow the wearable neural stimulation device 202 to be easily removed from the ear 204 (e.g., using a single action)”). Pontoppidan teaches a method and system that may optimize the setting parameters for hearing aids using an AI model to minimize cost functions by simulating acoustic signals and user feedback about sound quality across physical environments and hearing aid combinations, while Goodall teaches a system and method that may simulate ear functions under various conditions using images captured while earpieces are worn, where image analysis is used to arrive at optimized earpiece positions and multiple factors are considered in the simulation to improve the comfort of users wearing the earpieces. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan with Goodall to capture and analyze images in order to simulate ear functions and optimize earpiece settings, including the positions of the earpieces on the ears. The motivation to modify Pontoppidan with Goodall is “Use of known technique to improve similar devices (methods, or products) in the same way”.
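Purely as an illustrative aid to the placement-check logic this combination is read to suggest (detect the earpiece position in an image, compare it against a target fitment, and prompt an adjustment when it deviates), a minimal Python sketch follows. The names, tolerances, and stubbed detector are assumptions for the example, not Goodall's implementation.

```python
# Hedged sketch of image-based placement checking (hypothetical; the
# detector is a stub standing in for Goodall's image analysis circuitry).
from dataclasses import dataclass
import math

@dataclass
class Pose:
    x: float          # earpiece location in image coordinates, pixels
    y: float
    angle_deg: float  # orientation relative to an ear landmark

def detect_earpiece_pose(image) -> Pose:
    # Placeholder for an image-analysis step (e.g., a landmark detector).
    ...

def placement_prompt(detected: Pose, target: Pose,
                     dist_tol_px: float = 15.0,
                     angle_tol_deg: float = 10.0):
    """Return an adjustment prompt when the detected pose differs from the
    fitment definition; return None when placement is acceptable."""
    dist = math.hypot(detected.x - target.x, detected.y - target.y)
    angle_err = abs(detected.angle_deg - target.angle_deg)
    if dist > dist_tol_px or angle_err > angle_tol_deg:
        return (f"Adjust the hearing aid: {dist:.0f} px and "
                f"{angle_err:.0f} deg away from the target position.")
    return None

print(placement_prompt(Pose(120, 80, 25), Pose(100, 80, 10)))
```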
However, Pontoppidan, modified by Goodall, fails to explicitly disclose extracting an ear morphology from the first set of image data; matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology; accessing a first fitment definition of the first hearing aid type; generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user based on: the first fitment definition of the first hearing aid type; and rendering the first annotated head model for the user.
However, Jobani teaches extracting an ear morphology from the first set of image data (See Jobani: Figs. 5A-B, and [0059], “FIG. 5A illustrates a flow chart showing a method of producing the personalized earphone unit 114 by utilizing the mobile application 108 for communicating with the computer implemented system 100. The method starts by providing the electronic communication device 102 installed with the mobile application 108 for capturing at least one front structure and/or at least one back structure of at least one ear of a user, and possibly neck/back of ears/other anatomy of the user's head as relevant for the user's preferred earphone design, as shown in block 500”; and [0066], “The plurality of images or video detailing the structure of the ears is then processed by Photogrammetry algorithm of the 3d modeling application. The photogrammetry algorithm of the three dimensional modeling application includes four main sequential procedures for creating the three-dimensional model of the earphone unit”. Note that the ear structure (one front structure and/or at least one back structure of at least one ear) is mapped to an ear morphology);
matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology (See Jobani: Figs. 1-5, and [0045], “The program may run on the server 104 and processes the plurality of received images and/or video of the structure of the ear(s), the plurality of design preferences from the users, and/or other information. The program may thereby allow the plurality of users to design their own personalized earphone units 114 that would form a comfortable fit with the ears of the user without falling off, and would be specifically designed to stay in place while the user engages in his/her desired activities while wearing the earphone(s). In some embodiments of the invention, the mobile application may suggest at least one design for the plurality of earphones and based on that suggestion, the users may create the custom fit personalized earphone units 114”; and [0059], “Then as in block 514, the three dimensional printer unit 112 is operated to print the personalized earphone unit 114. Then different audio electronic components are inserted to the personalized earphone unit 114 casing printed using the above said method for generating the personalized earphone unit 114. In some embodiments the audio electronic components were inserted to the personalized earphone unit 114 during the printing process using the 3d printer unit 112. Post-Processing Procedure: There are various possible post-processing steps, depending on user's preferences, including (1) tumble smoothing to smooth the 3D print, (2) vapor finishing, (3) PAD printing or silk printing to print graphics on the earphones, (4) Vapor Deposition to coat the printed earphone with metal coating, (5) “lost wax investment casting,” (6) 3D printing of a mold of the intended earphones in order to cast them in various materials such as resin, polyurethane, metals, rubbers, etc. i.e., materials that are not yet efficiently printed using 3D printers, and/or (7) coating, painting, or dipping the printed earphones. These various post-processing procedures may be utilized to achieve cosmetic and/or utilitarian (such as comfort, durability, heat resistance) objectives of the user”. Note that the personalized custom-fit earpieces printed based on various information, including user preferences and ear structures, are mapped to matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology);
accessing a first fitment definition of the first hearing aid type;
generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that create a 3d model of the user's ear and use the 3D ear model to create custom fit earpieces for the user is mapped to generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user) based on:
the first fitment definition of the first hearing aid type; and
rendering the first annotated head model (See Jobani: Fig. 1, and [0065], “Also, in some instances, a plurality of models and designs of the earphones unit may be displayed superimposed on the 3D model of the user's ears to simulate what the earphones may look like when the user wears them on his/her ears”. Note that displaying the 3D ear model may be mapped to rendering the first annotated head model) for the user.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to include extracting an ear morphology from the first set of image data; matching the user to a first hearing aid type in a corpus of hearing aid types, based on the hearing profile and the ear morphology; and generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user, as taught by Jobani, in order to improve the comfort and durability of the personalized earphone unit (See Jobani: Fig. 1, and [0082], “The personalized earphone unit 114 can have custom-fit ergonomics for the user ear. Shock absorbing areas that may be under a lot of stress in the earphone may be mapped, and an elastic material may be used to act as a shock absorber thereby improving the comfort and durability of the personalized earphone unit 114”). Pontoppidan teaches a method and system that may optimize the setting parameters for hearing aids using an AI model to minimize cost functions by simulating acoustic signals and user feedback about sound quality across physical environments and hearing aid combinations, while Jobani teaches a system and method that may produce a personalized earphone unit forming a comfortable fit with the ears of a user by extracting the ear morphology and matching it with the earpieces using a 3D ear model. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan with Jobani to match the user's ear with the 3D model and generate personalized earpieces (hearing aids). The motivation to modify Pontoppidan with Jobani is “Use of known technique to improve similar devices (methods, or products) in the same way”.
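For illustration only, the four sequential photogrammetry stages Jobani recites (feature matching, coarse point cloud, dense point cloud, cleanup and meshing) can be outlined as stubs in Python. The function names are hypothetical; a production system would rely on structure-from-motion and multi-view stereo tooling not reproduced here.

```python
# Schematic of the photogrammetry pipeline described in Jobani; bodies are
# stubs marking where real reconstruction algorithms would run.

def match_feature_points(images):   # find corresponding points across views
    ...

def coarse_point_cloud(matches):    # triangulate matched features
    ...

def dense_point_cloud(coarse):      # densify via multi-view stereo
    ...

def clean_point_cloud(dense):       # remove outliers and noise
    ...

def mesh_from_cloud(cleaned):       # reconstruct a smooth surface mesh
    ...

def ear_model_from_images(images):
    """Chain the four stages into a 3D ear mesh, as the reference outlines."""
    matches = match_feature_points(images)
    coarse = coarse_point_cloud(matches)
    dense = dense_point_cloud(coarse)
    return mesh_from_cloud(clean_point_cloud(dense))
```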
However, Pontoppidan, modified by Goodall and Jobani, fails to explicitly disclose accessing a first fitment definition of the first hearing aid type; the first fitment definition of the first hearing aid type; and rendering the first annotated head model for the user.
However, Wu teaches accessing a first fitment definition of the first hearing aid type (See Wu: Figs. 5-7, and [0051], “Please refer to FIG. 5. When the hearing aid compensation parameter HA outputted by the user device 10 and the user parameter USER are all unknown, the management server 20 calculates that the recommendation coefficients of the hearing aids C1, I1 and B1 are all 1. In other words, if the customer is completely unknown at the beginning, the recommendation coefficient will be the same. At this time, the recommendation list includes hearing aids C1, I1 and B1. FIG. 6 is a schematic diagram illustrating a system for recommending hearing aids according to an embodiment of the present invention. Please refer to FIG. 6. When the hearing aid compensation parameter HA outputted by the user device 10 is unknown and the user parameter USER belong to high frequency hearing loss, the management server 20 calculates that the recommendation coefficients of hearing aids C1, I1 and B1 are respectively 0.9, 0.9, and 0.2. The recommendation coefficient of the hearing aid B1 suitable for low-frequency hearing loss will be very low. At this time, the recommended list includes hearing aids C1 and I1”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to a first fitment definition of the first hearing aid type);
the first fitment definition of the first hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to the first fitment definition of the first hearing aid type); and
rendering the first annotated head model for the user.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to include accessing a first fitment definition of the first hearing aid type, and the first fitment definition of the first hearing aid type, as taught by Wu, in order to effectively improve the efficiency of choosing and purchasing hearing aids while reducing the number of physical stores to effectively decrease costs (See Wu: Fig. 1, and [0061], “In conclusion, the present invention automatically choose suitable hearing aids used by users, effectively improve the efficiency of choosing and purchasing hearing aids, simultaneously reduce the number of physical stores to effectively decrease costs, enable the fitting staff in an environment different from an environment where the user is located to directly and automatically respond to the personal environmental needs, and adjust the hearing aid parameters at the remote end at any time, thereby reducing the time when users go to the physical store to adjust the hearing aid and effectively improving efficiency”). Pontoppidan teaches a method and system that may optimize the setting parameters for hearing aids using an AI model to minimize cost functions by simulating acoustic signals and user feedback about sound quality across physical environments and hearing aid combinations, while Wu teaches a system and method that may output audiometry information, generate and transmit an audiogram to a server, and have the server analyze that information to find the hearing aids that best fit the user according to a fitment definition, such as the user's hearing loss in different frequency bands. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan with Wu to use hearing loss as the fitment definition to select the best hearing aid type and settings. The motivation to modify Pontoppidan with Wu is “Use of known technique to improve similar devices (methods, or products) in the same way”.
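As an illustrative numeric sketch of Wu's recommendation coefficients as the rejection reads them: each hearing aid type receives a coefficient reflecting its suitability for the user's hearing-loss profile, and the recommendation list keeps the types whose coefficients remain high. The coefficient values below mirror Wu's worked example (C1/I1/B1 at 0.9/0.9/0.2 for high-frequency loss); the thresholding rule itself is an assumption for the example.

```python
# Hypothetical sketch of Wu-style recommendation coefficients (values taken
# from Wu's worked example; the thresholding rule is invented).

COEFFICIENTS = {
    # user profile -> {hearing aid type: recommendation coefficient}
    "unknown":             {"C1": 1.0, "I1": 1.0, "B1": 1.0},
    "high_frequency_loss": {"C1": 0.9, "I1": 0.9, "B1": 0.2},
}

def recommend(user_profile: str, threshold: float = 0.5) -> list:
    """Return hearing aid types whose coefficient clears the threshold,
    highest coefficient first."""
    coeffs = COEFFICIENTS.get(user_profile, COEFFICIENTS["unknown"])
    return sorted((t for t, c in coeffs.items() if c >= threshold),
                  key=lambda t: -coeffs[t])

print(recommend("high_frequency_loss"))  # ['C1', 'I1']
```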
However, Pontoppidan, modified by Goodall, Jobani, and Wu, fails to explicitly disclose rendering the first annotated head model for the user.
However, Bologna teaches rendering the first annotated head model for the user (See Bologna: Fig. 14, and [0167], “FIG. 14 shows multiple views of a three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, having a number of anthropometric points 120.64.2, 220.64.2 positioned thereon. As shown in FIG. 14, the points 120.64.2, 220.64.2 are positioned on the tip of the nose, edges of the eyes, between the eyes, the forwardmost edge of the chin, edges of the lips, and other locations. The anthropometric landmarks that are placed on the head model 120.99, 220.99 are then aligned with the anthropometric landmarks of the generic model using any of the alignment methods that are disclosed above (e.g., expectation-maximization, iterative closest point analysis, iterative closest point variant, Procrustes alignment, manifold alignment, and etc.) or methods that are known in the art”. Note that the three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, is mapped to rendering the first annotated head model for the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to include rendering the first annotated head model for the user, as taught by Bologna, in order to increase accuracy (See Bologna: Fig. 6, and [0142], “The scanning hood 110.8.2, 210.8.2 provides for increased accuracy when performing the information acquisition process by conforming to the anatomical features of the player's head H and facial region F, namely the topography and contours of the head H and facial region F while reducing effects of hair”). Pontoppidan teaches a method and system that may optimize the setting parameters for hearing aids using an AI model to minimize cost functions by simulating acoustic signals and user feedback about sound quality across physical environments and hearing aid combinations, while Bologna teaches a system and method that may render a 3D head model and fit it to the user's helmet to increase fitting accuracy. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan with Bologna to render the 3D head model and achieve a better fit between the user's head and the earpieces, as Jobani also teaches overlaying the earpieces on the 3D model display. The motivation to modify Pontoppidan with Bologna is “Simple substitution of one known element for another to obtain predictable results”.
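Bologna's landmark alignment step names several standard methods (expectation-maximization, iterative closest point, Procrustes, manifold alignment). As a generic textbook sketch, not code from the reference, a rigid Procrustes alignment of anthropometric landmark sets can be written as:

```python
# Minimal rigid Procrustes (Kabsch) alignment of two landmark sets, one of
# the alignment methods Bologna lists; generic sketch, not the reference's code.
import numpy as np

def procrustes_align(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Rigidly align source landmarks (N x 3) onto target landmarks (N x 3)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t
    U, _, Vt = np.linalg.svd(S.T @ T)   # optimal rotation via SVD
    if np.linalg.det(U @ Vt) < 0:       # correct an improper (reflected) fit
        U[:, -1] *= -1
    return (source - mu_s) @ (U @ Vt) + mu_t

# Example: three toy landmarks; the target is the source rotated 90 degrees
# about the vertical axis and translated.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = np.array([[1.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 1.0, 0.0]])
print(procrustes_align(src, tgt))       # recovers tgt up to rounding
```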
Regarding claim 2, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 1 as outlined above. Further, Goodall, Jobani, and Wu teach the method of Claim 1:
wherein accessing the first set of image data depicting the head of the user (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured by using the user-facing imager to capture the user image are the first set of image data depicting the head of the user) comprises, at a mobile device (See Goodall: Figs. 2A-B, and [0131], “System 200 includes a computing device 208 in communication with wearable neural stimulation device 202 via communication link 210 (e.g., a wireless or wired communication link 210). Computing device 208 (e.g., a personal computing device) can be an audio player, a mobile phone, a computer, or any of various other devices having computing capability (e.g., microprocessor based devices) and including application software (e.g., mobile application) and/or suitable hardware for controlling operation of wearable neural stimulation device 202”. Note that the mobile phone is mapped to a mobile device):
capturing a first sequence of photographic images of the user, via a forward-facing optical sensor arranged in the mobile device, during execution of the hearing assessment by the user (See Goodall: Fig. 47, and [0376], “In an aspect, image processing circuitry 4708 includes physiological condition module 4756, which determines a physiological condition of the user from user image 4712, based upon one or more parameter 4716. Physiological condition of the user can be inferred from eye movement, pupil dilation, heart rate, respiration rate, facial coloration, facial temperature, etc. A visible or IR image of the patient, obtained with a camera built into a computing device or operatively connected to the computing device can be used. Still or moving (video) image may be used. For example, video images of the subject may be analyzed to determine blood flow using Eulerian video magnification”. Note that the moving (video) images are mapped to a first sequence of photographic images of the user);
wherein accessing the first fitment definition of the first hearing aid type comprises accessing the first fitment definition defining (See Wu: Figs. 5-7, and [0051], “Please refer to FIG. 5. When the hearing aid compensation parameter HA outputted by the user device 10 and the user parameter USER are all unknown, the management server 20 calculates that the recommendation coefficients of the hearing aids C1, I1 and B1 are all 1. In other words, if the customer is completely unknown at the beginning, the recommendation coefficient will be the same. At this time, the recommendation list includes hearing aids C1, I1 and B1. FIG. 6 is a schematic diagram illustrating a system for recommending hearing aids according to an embodiment of the present invention. Please refer to FIG. 6. When the hearing aid compensation parameter HA outputted by the user device 10 is unknown and the user parameter USER belong to high frequency hearing loss, the management server 20 calculates that the recommendation coefficients of hearing aids C1, I1 and B1 are respectively 0.9, 0.9, and 0.2. The recommendation coefficient of the hearing aid B1 suitable for low-frequency hearing loss will be very low. At this time, the recommended list includes hearing aids C1 and I1”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to a first fitment definition of the first hearing aid type):
a target reference feature on a target head (See Goodall: Fig. 2A-B, and [0131], “FIGS. 2A and 2B depict a generalized system 200 including a wearable neural stimulation device 202 for delivering a stimulus to an ear 204 of a subject 20”. Note that the ear 204 of a subject is mapped to a target reference feature on a target head); and
a target orientation of the first hearing aid type, on the target head, relative to the target reference feature (See Goodall: Figs. 2A-B, and [0168], “In an example, the wearable neural stimulation device 202 can exhibit a shape and fit that attaches the wearable neural stimulation device 202 to the ear 204 of the subject 206 which can allow the neural stimulator 212 and, optionally neural stimulator 222, to contact the ear 204. In an example, the wearable neural stimulation device 202 can operate while the subject 206 is moving. In an example, the wearable neural stimulation device 202 can exhibit a shape, size, or attachment mechanism that allows the wearable neural stimulation device 202 to at least one of be worn only in a certain ear 204, be worn in a certain orientation relative to the ear 204, or allow the wearable neural stimulation device 202 to be easily removed from the ear 204 (e.g., using a single action)”. Note that being worn in a certain orientation relative to the ear is mapped to a target orientation of the first hearing aid type, on the target head, relative to the target reference feature); and
wherein generating the first annotated head model (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user is mapped to generating the first annotated head model) comprises:
combining the first sequence of photographic images of the user into a three-dimensional head model (See Jobani: Figs. 1-4, and [0055], “The database 116 is coupled to server 104. The database stores and facilitates retrieval of information used by the server 104, the information may include a plurality of user information including user credentials. The database 116 may store information related to the plurality of videos and the plurality of images uploaded by individual users. This information may be used by server 104 to perform operations using the 3D modeling application 310, for generating the 3D model of the personalized earphone unit 114”. Note that the 2D images captured by the user, uploaded to the server, and analyzed to create a 3D model are mapped to combining the first sequence of photographic images of the user into a three-dimensional head model);
detecting a first reference feature, on the head of the user and analogous to the target reference feature, in the three-dimensional head model (See Jobani: Figs. 1-4, and [0066], “This operator can visually detect several feature points of the user's ears from different angles in the plurality of 2D captured images of the user's ears and compare these 2D images to the scanned mesh created by the photogrammetry algorithm. By identifying these feature points from the 2D images and identifying them on the scanned mesh model, the operator can refine the scanned model to improve on any areas that may not have been well scanned”. Note that detecting several feature points of the user's ears from different angles is mapped to detecting a first reference feature, on the head of the user and analogous to the target reference feature, in the three-dimensional head model); and
generating the first annotated head model by projecting the virtual representation of the first hearing aid type onto the three-dimensional head model according to the target orientation relative to the first reference feature (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user is mapped to generating the first annotated head model according to the target orientation relative to the first reference feature).
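For the claim 2 annotation step mapped above, a hedged sketch of posing a virtual device model at a target orientation relative to a detected ear landmark follows; the coordinate conventions, names, and single-axis rotation are simplifying assumptions, not the method of any cited reference.

```python
# Hypothetical sketch: project a device mesh onto a head model at a target
# orientation, anchored at a detected reference feature (ear landmark).
import numpy as np

def rotation_z(deg: float) -> np.ndarray:
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def annotate_head_model(device_vertices: np.ndarray,
                        ear_landmark: np.ndarray,
                        target_angle_deg: float) -> np.ndarray:
    """Pose the device mesh at the target orientation relative to the
    reference feature, yielding vertices in head-model coordinates."""
    R = rotation_z(target_angle_deg)
    return device_vertices @ R.T + ear_landmark

device = np.array([[0.0, 0.0, 0.0],     # toy three-vertex "mesh", meters
                   [0.01, 0.0, 0.0],
                   [0.0, 0.02, 0.0]])
ear = np.array([0.07, 0.0, 1.65])       # detected landmark on the head model
print(annotate_head_model(device, ear, target_angle_deg=15.0))
```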
Regarding claim 3, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 2 as outlined above. Further, Goodall and Bologna teach the method of Claim 2:
wherein accessing the virtual representation of the first hearing aid type comprises accessing the virtual representation of the first hearing aid type comprising a three-dimensional computer-aided design model of the first hearing aid type (See Bologna: Fig. 14, and [0163], “For example, the body part model 120.99, 220.99 may be created using a photogrammetry method and additional information may be added to the model 120.99, 220.99 based on a contact scanning method. In a further example, the body part model 120.99, 220.99 may be created by the computerized modeling system based on the point cloud that is generated by the LiDAR sensor and additional information may be added to the body part model 120.99, 220.99 using a photogrammetry technique. It should be understood that the body part model 120.99, 220.99 may be analyzed, displayed, manipulated, or altered in any format, including a non-graphical format (e.g., spreadsheet) or a graphical format (e.g., 3D rendering of the model in a CAD program). Typically, the 3D rendering of the body part model 120.99, 220.99 is shown by a thin shell that has an outer surface, in a wire-frame form (e.g., model in which adjacent points on a surface are connected by line segments), or as a solid object”. Note that 3D rendering of the model in a CAD program is mapped to accessing the virtual representation of the first hearing aid type comprising a three-dimensional computer-aided design model of the first hearing aid type); and
wherein generating the first annotated head model comprises generating the first annotated head model by projecting the three-dimensional computer-aided design model of the first hearing aid type onto the three-dimensional head model according to the target orientation relative to the first reference feature (See Goodall: Figs. 35-36, and [0334], “FIG. 36 depicts side and top plan views of the concha insert 3510. FIG. 37 depicts side and end views of the ear canal insert 3505. In some embodiments, the base portion 3515 of the concha insert 3510 may include a throughhole 3525. The body structure of the audio headphone 3550 and/or the ear canal insert 3505 may include a projection 3530 (FIG. 35B) configured to fit through the throughhole 3525 to mate with a complementary portion 3535 of the body structure of the other to secure the ear canal insert 3505 and the concha insert 3510 to the body structure of the audio headphone 3550. In other words, the ear canal insert 3505 and audio headphone 3550 may engage each other via the throughhole 3525 of the concha insert 3510 to secure the ear canal insert 3505 and the concha insert 3510 to the audio headphone 3550. In the example depicted in FIGS. 35A-37, the projection 3530 is included with the audio headphone 3550 and the complementary portion 3535 is included with the ear canal insert 3505. In some aspects, the projection 3530 and the complementary portion 3535 mate via a threaded connection. In some embodiments, the projection 3530 and the complementary portion 3535 mate via a friction fit. In some embodiments, the projection 3530 and the complementary portion 3535 mate via a snap fit.”. Note that projecting the earpieces onto the ear to adjust the 3D ear model is mapped to generating the first annotated head model by projecting the three-dimensional computer-aided design model of the first hearing aid type onto the three-dimensional head model according to the target orientation relative to the first reference feature. Note also that the 3D ear model may be generated or rendered using CAD tools as taught by Bologna, and one of ordinary skill in the art may use CAD for 3D modeling of body parts such as the head or ears.).
Regarding claim 15, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 1 as outlined above. Further, Pontoppidan teaches the method of Claim 1, further comprising:
during the second time period (See Pontoppidan: Figs. 1A-B, and [0171], “Based on the transferred data from the user's personal experience while wearing the hearing aid(s) a 2.sup.nd loop is executed by the simulation model where the logged data are used instead of or as a supplement to the predefined (general) data representing acoustic environments, user intent, etc. Thereby an optimized hearing aid setting is provided. The optimized hearing aid setting is transferred to the specific hearing aid and applied to the appropriate processing algorithms. Thereby an optimized (personalized) hearing aid is provided”; and [0172], “The 2.sup.nd loop can be repeated continuously or with a predefined frequency, or triggered by specific events (e.g. power-up, data logger full, consultation with HCP (e.g. initiated by HCP), initiated by the user via a user interface, etc.)”. Note: the 2nd loop is mapped to the second time period.):
loading the representation of the hearing profile onto a local device (See Pontoppidan: Figs. 7A-B, and [0209], “S2. Transferring the simulation-based hearing aid setting to an actual version of said specific hearing aid”. Note: transferring the simulated setting to the actual hearing aid is mapped to loading the representation of the hearing profile onto a local device.); and
during a third time period, after the user installs the first hearing aid (See Pontoppidan: Figs. 1A-B, and [0171], “Based on the transferred data from the user's personal experience while wearing the hearing aid(s) a 2.sup.nd loop is executed by the simulation model where the logged data are used instead of or as a supplement to the predefined (general) data representing acoustic environments, user intent, etc. Thereby an optimized hearing aid setting is provided. The optimized hearing aid setting is transferred to the specific hearing aid and applied to the appropriate processing algorithms. Thereby an optimized (personalized) hearing aid is provided”; and [0172], “The 2.sup.nd loop can be repeated continuously or with a predefined frequency, or triggered by specific events (e.g. power-up, data logger full, consultation with HCP (e.g. initiated by HCP), initiated by the user via a user interface, etc.)”. Note: the repeated 2nd loop is mapped to the third time period.):
playing an audio clip, selected from a set of audio clips, based on the hearing profile (See Pontoppidan: Fig. 2, and [0161], “A third step may comprise that data logged from hearing aids that describe sound scenes in level, SNR, etc., are used to augment the scenes, which are used for the simulation and optimization of hearing aid settings, cf. e.g. validation step 3 in FIG. 2. This may also be extended with more descriptive classifications of sounds and sound scenes beyond quiet, speech, speech-in-noise, and noise. Hereby a set of standardized audio recordings of speech and other sounds can be remixed together with the range of parameters experienced by each individual and also beyond the scenes experienced by the individual to create simulation environments that prepare settings for unmet scenes with significant and sufficient generalizability over just the sound scenes the individual encounters and the sound scenes the individual could record and submit”. Note: the audio played to the users to validate the hearing aid setting in the repeated 2nd loop is mapped to playing an audio clip, selected from a set of audio clips, based on the hearing profile.); and
based on the audio clip, prompting the user on the local device to adjust playback characteristics of the first hearing aid (See Pontoppidan: Figs. 1A-B, and [0184], “While Alice uses the hearing instruments, the hearing instruments and the APP (e.g. implemented on a smartphone or other appropriate processing comprising display and data entry functionality) collects data about the sound environments and possibly intents of Alice in those situations (cf. ‘Data logger’ in FIG. 1A, 1B, etc.). The APP also prompts Alice to state what she parameter she uses to optimize for in the different sound environments and situations”. Note: prompting Alice to state which parameter she optimizes for in the different sound environments and situations is mapped to prompting the user on the local device to adjust playback characteristics of the first hearing aid.).
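As an illustrative sketch of the claim 15 flow as mapped above (select a validation clip matched to the hearing profile, then prompt the user for a playback adjustment), consider the following; the clip names, profile fields, and threshold are invented for the example.

```python
# Hypothetical sketch of profile-driven clip selection and an adjustment
# prompt (invented names and threshold; not code from Pontoppidan).

AUDIO_CLIPS = {
    "speech_in_noise": "clip_speech_noise.wav",
    "quiet_speech":    "clip_quiet_speech.wav",
}

def select_clip(hearing_profile: dict) -> str:
    """Choose a clip that exercises the user's weakest listening condition."""
    key = ("speech_in_noise"
           if hearing_profile.get("snr_loss_db", 0.0) > 3.0
           else "quiet_speech")
    return AUDIO_CLIPS[key]

def adjustment_prompt(clip: str) -> str:
    return (f"After listening to {clip}: how should playback change? "
            "[1] more treble  [2] more bass  [3] louder  [4] no change")

profile = {"snr_loss_db": 5.0}
print(adjustment_prompt(select_clip(profile)))
```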
Regarding claim 16, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 1 as outlined above. Further, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach a method (See Pontoppidan: Figs. 7A-B, and [0201], “FIG. 7A shows a flow diagram for an embodiment of a method of determining a parameter setting for a specific hearing aid of a particular user according to the present disclosure”) comprising:
during a first time period (See Pontoppidan: Figs. 1A-B, and [0167], “In a first loop, the recommended hearing aid setting is solely based on the simulation model (using a hearing profile of the specific user and (previously) generated hearing aid input signals corresponding to a variety of acoustic environments (signal and noise levels, noise types, user preferences, etc.), cf. arrow denoted ‘1.sup.st loop’ in FIG. 1A, 1B symbolizing at least one (but typically a multitude of runs) through the functional blocks of the model (‘AI-hearing model’->‘Audiologic profile’->‘Loudness, speech’->‘Acoustic situations and user preferences’->‘AI-hearing model’ in FIG. 1A”. Note: the 1st loop is mapped to the first time period. The 1st and 2nd loops can be repeated many times, depending on the final hearing aid performance for a user, and can be applied to many different users as well. Thus, a user's first visit may also be mapped to the first time period, and subsequent visits of the same user may be mapped to the second time period), at a kiosk (See Goodall: Fig. 12, and [0256], “In an aspect, computing device 1204 is personal digital assistant 1226, a personal entertainment device 1228, a mobile phone 1230, a laptop computer 1232, a personal computer 1234 (e.g., a tablet personal computer), a wearable computing device 1236 (e.g., a fitness band, an item of clothing, attire, or eyewear incorporating computing capability), a networked computer 1238, a computing system comprising a cluster of processors 1240, a computing system comprising a cluster of servers 1242, a workstation computer 1244, a desktop computer 1246, a kiosk 1248, a mobile healthcare platform 1250, and/or an external healthcare network 1252. In various aspects, computing device 1204 includes one or more of a portable computing device, a wearable computing device, a mobile computing device, and a thin client computing device, for example. It is noted that the remote system 1224 can include the same or similar device as the computing device 1204”):
accessing a first set of image data depicting a head of a user (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images of the user captured via the user-facing imager are mapped to accessing a first set of image data depicting a head of the user);
extracting an ear morphology from the first set of image data (See Jobani: Figs. 5A-B, and [0059], “FIG. 5A illustrates a flow chart showing a method of producing the personalized earphone unit 114 by utilizing the mobile application 108 for communicating with the computer implemented system 100. The method starts by providing the electronic communication device 102 installed with the mobile application 108 for capturing at least one front structure and/or at least one back structure of at least one ear of a user, and possibly neck/back of ears/other anatomy of the user's head as relevant for the user's preferred earphone design, as shown in block 500”; and [0066], “The plurality of images or video detailing the structure of the ears is then processed by Photogrammetry algorithm of the 3d modeling application. The photogrammetry algorithm of the three dimensional modeling application includes four main sequential procedures for creating the three-dimensional model of the earphone unit”. Note that the ear structure (at least one front structure and/or at least one back structure of at least one ear) is mapped to the ear morphology);
matching the user to a first hearing aid type in a corpus of hearing aid types, based on the ear morphology (See Jobani: Figs. 1-5, and [0045], “The program may run on the server 104 and processes the plurality of received images and/or video of the structure of the ear(s), the plurality of design preferences from the users, and/or other information. The program may thereby allow the plurality of users to design their own personalized earphone units 114 that would form a comfortable fit with the ears of the user without falling off, and would be specifically designed to stay in place while the user engages in his/her desired activities while wearing the earphone(s). In some embodiments of the invention, the mobile application may suggest at least one design for the plurality of earphones and based on that suggestion, the users may create the custom fit personalized earphone units 114”; and [0059], “Then as in block 514, the three dimensional printer unit 112 is operated to print the personalized earphone unit 114. Then different audio electronic components are inserted to the personalized earphone unit 114 casing printed using the above said method for generating the personalized earphone unit 114. In some embodiments the audio electronic components were inserted to the personalized earphone unit 114 during the printing process using the 3d printer unit 112. Post-Processing Procedure: There are various possible post-processing steps, depending on user's preferences, including (1) tumble smoothing to smooth the 3D print, (2) vapor finishing, (3) PAD printing or silk printing to print graphics on the earphones, (4) Vapor Deposition to coat the printed earphone with metal coating, (5) “lost wax investment casting,” (6) 3D printing of a mold of the intended earphones in order to cast them in various materials such as resin, polyurethane, metals, rubbers, etc. i.e., materials that are not yet efficiently printed using 3D printers, and/or (7) coating, painting, or dipping the printed earphones. These various post-processing procedures may be utilized to achieve cosmetic and/or utilitarian (such as comfort, durability, heat resistance) objectives of the user”. Note that the personalized custom fit earpieces printed based on the various information including user preference and ear structures are mapped to matching the user to a first hearing aid type in a corpus of hearing aid types, based on the ear morphology);
accessing a virtual representation of the first hearing aid type (See Pontoppidan: Fig. 2, and [0174], “The information in box 4, denoted ‘Big5 personality traits added to hearing profile for stratification’ is fed to the ‘Hearing diagnostics of particular user’ to provide a supplement to the possible more hearing loss dominated data of the user. The information in boxes 2 (2A, 2B) and 3 are all fed to the AI-hearing model, representing exemplary data of the acoustic environments encountered by the user when wearing the hearing aid, and the user's reaction to these environments”. Note: the AI-hearing model is mapped to a virtual representation of the first hearing aid type);
accessing a first fitment definition of the first hearing aid type (See Wu: Figs. 5-7, and [0051], “Please refer to FIG. 5. When the hearing aid compensation parameter HA outputted by the user device 10 and the user parameter USER are all unknown, the management server 20 calculates that the recommendation coefficients of the hearing aids C1, I1 and B1 are all 1. In other words, if the customer is completely unknown at the beginning, the recommendation coefficient will be the same. At this time, the recommendation list includes hearing aids C1, I1 and B1. FIG. 6 is a schematic diagram illustrating a system for recommending hearing aids according to an embodiment of the present invention. Please refer to FIG. 6. When the hearing aid compensation parameter HA outputted by the user device 10 is unknown and the user parameter USER belong to high frequency hearing loss, the management server 20 calculates that the recommendation coefficients of hearing aids C1, I1 and B1 are respectively 0.9, 0.9, and 0.2. The recommendation coefficient of the hearing aid B1 suitable for low-frequency hearing loss will be very low. At this time, the recommended list includes hearing aids C1 and I1”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to a first fitment definition of the first hearing aid type);
generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user is mapped to generating a first annotated head model depicting the first hearing aid positioned accurately on an ear of the user) based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager, which are analyzed to determine the position of the earpieces, are mapped to the first set of image data);
the virtual representation of the first hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid is mapped to the virtual representation of the first hearing aid type); and
the first fitment definition of the first hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to the first fitment definition of the first hearing aid type);
rendering the first annotated head model (See Jobani: Fig. 1, and [0065], “Also, in some instances, a plurality of models and designs of the earphones unit may be displayed superimposed on the 3D model of the user's ears to simulate what the earphones may look like when the user wears them on his/her ears”. Note that displaying the 3D ear model may be mapped to rendering the first annotated head model) for the user (See Bologna: Fig. 14, and [0167], “FIG. 14 shows multiple views of a three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, having a number of anthropometric points 120.64.2, 220.64.2 positioned thereon. As shown in FIG. 14, the points 120.64.2, 220.64.2 are positioned on the tip of the nose, edges of the eyes, between the eyes, the forwardmost edge of the chin, edges of the lips, and other locations. The anthropometric landmarks that are placed on the head model 120.99, 220.99 are then aligned with the anthropometric landmarks of the generic model using any of the alignment methods that are disclosed above (e.g., expectation-maximization, iterative closest point analysis, iterative closest point variant, Procrustes alignment, manifold alignment, and etc.) or methods that are known in the art”. Note that the three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, is mapped to rendering the first annotated head model for the user); and
during a second time period, at a local device, following provision of a first hearing aid of the first hearing aid type to the user (See Pontoppidan: Figs. 1A-B, and [0171], “Based on the transferred data from the user's personal experience while wearing the hearing aid(s) a 2.sup.nd loop is executed by the simulation model where the logged data are used instead of or as a supplement to the predefined (general) data representing acoustic environments, user intent, etc. Thereby an optimized hearing aid setting is provided. The optimized hearing aid setting is transferred to the specific hearing aid and applied to the appropriate processing algorithms. Thereby an optimized (personalized) hearing aid is provided”; and [0172], “The 2.sup.nd loop can be repeated continuously or with a predefined frequency, or triggered by specific events (e.g. power-up, data logger full, consultation with HCP (e.g. initiated by HCP), initiated by the user via a user interface, etc.)”. Note: the 2nd loop is mapped to the second time period.):
loading a representation of a hearing profile of the user onto the first hearing aid (See Pontoppidan: Figs. 7A-B, and [0209], “S2. Transferring the simulation-based hearing aid setting to an actual version of said specific hearing aid”. Note: transferring the simulated setting to the actual hearing aid is mapped to loading a representation of the hearing profile onto the first hearing aid.);
generating an ear placement instruction for the first hearing aid (See Goodall: Figs. 48-52, and [0384], “As discussed herein above, in an aspect, image detection and analysis is used to detect improper placement of one or more earpieces on the ear(s) of a user of a computing device”; and [0391], “In another aspect, method 5200 includes delivering, under control of test signal circuitry on the computing device, an audio test signal via a sound source associated with the at least one earpiece, and determining proper placement of the at least one earpiece based upon audio feedback, as indicated at 5220. In an aspect, audio feedback is determined from an audio signal detected from the earpiece, which will vary depending upon the placement of the earpiece, e.g. whether or not it is firmly seated within the ear canal. In an aspect, audio feedback is determined from the user, e.g. the user self-reporting of audio quality”. Note that image analysis is used to determine the placement of the earpieces (earpieces are mapped to the hearing aids) on the ears, which is mapped to generating an ear placement instruction for the first hearing aid) based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager, which are analyzed to determine the position of the earpieces, are mapped to the first set of image data); and
the virtual representation of the first hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid is mapped to the virtual representation of the first hearing aid type);
rendering the ear positioning instruction (See Goodall: Fig. 13, and [0272], “In an aspect, secondary signal input 1360 is adapted to receive a position signal indicative of a position of the external neural stimulator with respect to the pinna of the subject. In connection therewith, system 1300 may also include notification circuitry 1406 for delivering a notification to the subject indicating that the external neural stimulator should be repositioned. In an aspect, notification circuitry 1406 includes circuitry for delivering the notification via a graphical display 1368 of computing device 1302”. Note that the position and reposition signals are mapped to rendering the ear positioning instruction);
accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user (See Goodall: Figs. 45-46, and [0364], “This may include capturing the image of the user of the computing device responsive to receiving the handshake signal from the ear stimulation device control circuitry, as indicated at 4506. In an aspect, method 4500 includes sending a handshake signal to the ear stimulation device control circuitry responsive to determining the presence of the at least one earpiece located at the ear of the user in the image, as indicated at 4508”. Note that the images captured by using the user-facing imager after receiving the handshake signal are mapped to accessing a second set of image data depicting the head of the user with the first hearing aid located on the ear of the user);
detecting a position of the first hearing aid, arranged on the ear of the user, in the second set of image data (See Goodall: Figs. 44-46, and [0362], “FIGS. 44-46 depict further aspects of the method of FIG. 43, wherein steps 4302, 4304, and 4306 are as depicted and described in connection with FIG. 43. As depicted in FIG. 44, in further aspects of method 4400, the at least one parameter is indicative of at least one emotion of the user 4402, is indicative of a physiological condition of the user 4404, is indicative of a medical condition of the user 4406, is indicative of an identity of the user 4408, is a heart rate of the user 4410, is related to eye position of the user 4412, is related to eye movement of the user 4414 of the user, or is indicative of a position of the earpiece with respect to the ear of the user 4416”. Note that the position of the earpiece with respect to the ear of the user is mapped to detecting the position of the first hearing aid, arranged on the ear of the user, in the second set of image data); and
in response to the position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type (See Goodall: Fig. 49, and [0384], “As discussed herein above, in an aspect, image detection and analysis is used to detect improper placement of one or more earpieces on the ear(s) of a user of a computing device. In some aspects, it is desirable to detect quality of electrical contact between the ear and an electrode used for delivering electrical stimuli to or sensing electrical signals from the ear”. Note that the image detection and analysis used to detect improper placement of one or more earpieces on the ear(s) of a user is mapped to in response to the position of the first hearing aid, arranged on the ear of the user, differing from the first fitment definition of the first hearing aid type):
generating a prompt to adjust placement of the first hearing aid on the ear of the user (See Goodall: Figs. 43-48, and [0363], “In an aspect, method 4400 further includes delivering, under control of notification circuitry on the computing device, a notification to the user informing the user of the need to adjust a position of an earpiece of the ear stimulation device with respect to the ear of the user, as indicated at 4418. In various aspects, delivering a notification includes delivering a text notification 4420, delivering a visible notification 4422, or delivering an audio notification 4424. The notification can be specific (e.g., a text or audio notification instructing the user to “push the earpiece further into the ear canal” or “move the earpiece higher up on the pinna”) or non-specific (e.g., a flashing light or beeping sound that indicates the need to reposition the earpiece without providing detail on how specifically it should be repositioned)”. Note that the specific text notification to the user to adjust the earpiece is mapped to generating a prompt to adjust placement of the first hearing aid on the ear of the user); and
presenting the prompt to the user (See Goodall: Figs. 48A-B, and [0381], “In the example of FIG. 48A, the computing device is a smart phone 4800 configured with application software that notifies the user of improper placement of the earpieces. Detection and notification is performed, e.g. as described in connection with FIGS. 43-47. Delivery of text, visible, and audio notifications to the user (e.g., as in the method of FIG. 46) are illustrated in FIG. 48A”. Note that the delivery of text, visible, and audio notifications to the user is mapped to presenting the prompt to the user).
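For illustration only, the placement check mapped from Goodall above — comparing a detected earpiece pose in the image data against a fitment definition and prompting the user when the deviation is too large — might be sketched as follows. The coordinates, tolerances, and function name are assumptions, not the fitment logic of any cited reference:

```python
# Illustrative sketch only: prompt generation when a detected earpiece
# position differs from a target fitment definition beyond a tolerance.

import math

def placement_prompt(detected_xy, detected_angle_deg,
                     target_xy, target_angle_deg,
                     max_offset_px=15.0, max_angle_deg=10.0):
    # Positional deviation between detected and target placement (pixels).
    offset = math.dist(detected_xy, target_xy)
    # Angular deviation, wrapped into [-180, 180] degrees.
    angle = (detected_angle_deg - target_angle_deg + 180) % 360 - 180
    if offset <= max_offset_px and abs(angle) <= max_angle_deg:
        return None  # placement matches the fitment definition; no prompt
    hints = []
    if offset > max_offset_px:
        hints.append(f"move the earpiece {offset:.0f} px toward the target position")
    if abs(angle) > max_angle_deg:
        direction = "clockwise" if angle < 0 else "counterclockwise"
        hints.append(f"rotate the earpiece {abs(angle):.0f} degrees {direction}")
    return "Please adjust placement: " + "; ".join(hints)

print(placement_prompt((120, 88), 25.0, (100, 80), 10.0))
```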
Regarding claim 17, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 16 as outlined above. Further, Goodall, Jobani, and Wu teach the method of Claim 16:
wherein accessing the first set of image data depicting the head of the user (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images of the user captured via the user-facing imager are mapped to the first set of image data depicting the head of the user) comprises:
capturing a first sequence of photographic images of the user, via a forward-facing optical sensor arranged in the kiosk (See Goodall: Fig. 12, and [0256], “In an aspect, computing device 1204 is personal digital assistant 1226, a personal entertainment device 1228, a mobile phone 1230, a laptop computer 1232, a personal computer 1234 (e.g., a tablet personal computer), a wearable computing device 1236 (e.g., a fitness band, an item of clothing, attire, or eyewear incorporating computing capability), a networked computer 1238, a computing system comprising a cluster of processors 1240, a computing system comprising a cluster of servers 1242, a workstation computer 1244, a desktop computer 1246, a kiosk 1248, a mobile healthcare platform 1250, and/or an external healthcare network 1252. In various aspects, computing device 1204 includes one or more of a portable computing device, a wearable computing device, a mobile computing device, and a thin client computing device, for example. It is noted that the remote system 1224 can include the same or similar device as the computing device 1204”), during execution of a hearing assessment by the user (See Goodall: Fig. 47, and [0376], “In an aspect, image processing circuitry 4708 includes physiological condition module 4756, which determines a physiological condition of the user from user image 4712, based upon one or more parameter 4716. Physiological condition of the user can be inferred from eye movement, pupil dilation, heart rate, respiration rate, facial coloration, facial temperature, etc. A visible or IR image of the patient, obtained with a camera built into a computing device or operatively connected to the computing device can be used. Still or moving (video) image may be used. For example, video images of the subject may be analyzed to determine blood flow using Eulerian video magnification”. Note that the moving (video) images are mapped to a first sequence of photographic images of the user);
wherein accessing the first fitment definition of the first hearing aid type comprises accessing the first fitment definition defining (See Wu: Figs. 5-7, and [0051], “Please refer to FIG. 5. When the hearing aid compensation parameter HA outputted by the user device 10 and the user parameter USER are all unknown, the management server 20 calculates that the recommendation coefficients of the hearing aids C1, I1 and B1 are all 1. In other words, if the customer is completely unknown at the beginning, the recommendation coefficient will be the same. At this time, the recommendation list includes hearing aids C1, I1 and B1. FIG. 6 is a schematic diagram illustrating a system for recommending hearing aids according to an embodiment of the present invention. Please refer to FIG. 6. When the hearing aid compensation parameter HA outputted by the user device 10 is unknown and the user parameter USER belong to high frequency hearing loss, the management server 20 calculates that the recommendation coefficients of hearing aids C1, I1 and B1 are respectively 0.9, 0.9, and 0.2. The recommendation coefficient of the hearing aid B1 suitable for low-frequency hearing loss will be very low. At this time, the recommended list includes hearing aids C1 and I1”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to a first fitment definition of the first hearing aid type):
a target reference feature on a target head (See Goodall: Figs. 2A-B, and [0131], “FIGS. 2A and 2B depict a generalized system 200 including a wearable neural stimulation device 202 for delivering a stimulus to an ear 204 of a subject 20”. Note that the ear 204 of a subject is mapped to a target reference feature on a target head); and
a target orientation of the first hearing aid type, on the target head, relative to the target reference feature (See Goodall: Figs. 2A-B, and [0168], “In an example, the wearable neural stimulation device 202 can exhibit a shape and fit that attaches the wearable neural stimulation device 202 to the ear 204 of the subject 206 which can allow the neural stimulator 212 and, optionally neural stimulator 222, to contact the ear 204. In an example, the wearable neural stimulation device 202 can operate while the subject 206 is moving. In an example, the wearable neural stimulation device 202 can exhibit a shape, size, or attachment mechanism that allows the wearable neural stimulation device 202 to at least one of be worn only in a certain ear 204, be worn in a certain orientation relative to the ear 204, or allow the wearable neural stimulation device 202 to be easily removed from the ear 204 (e.g., using a single action)”. Note that being worn in a certain orientation relative to the ear is mapped to a target orientation of the first hearing aid type, on the target head, relative to the target reference feature); and
wherein generating the first annotated head model (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user is mapped to generating the first annotated head model) comprises:
combining the first sequence of photographic images of the user into a three-dimensional head model (See Jobani: Figs. 1-4, and [0055], “The database 116 is coupled to server 104. The database stores and facilitates retrieval of information used by the server 104, the information may include a plurality of user information including user credentials. The database 116 may store information related to the plurality of videos and the plurality of images uploaded by individual users. This information may be used by server 104 to perform operations using the 3D modeling application 310, for generating the 3D model of the personalized earphone unit 114”. Note that the 2D images captured by the user, uploaded to the server, and analyzed by the server to create a 3D model are mapped to combining the first sequence of photographic images of the user into a three-dimensional head model);
detecting a first reference feature, on the head of the user and analogous to the target reference feature, in the three-dimensional head model (See Jobani: Figs. 1-4, and [0066], “This operator can visually detect several feature points of the user's ears from different angles in the plurality of 2D captured images of the user's ears and compare these 2D images to the scanned mesh created by the photogrammetry algorithm. By identifying these feature points from the 2D images and identifying them on the scanned mesh model, the operator can refine the scanned model to improve on any areas that may not have been well scanned”. Note that detecting several feature points of the user's ears from different angles is mapped to detecting a first reference feature, on the head of the user and analogous to the target reference feature, in the three-dimensional head model); and
generating the first annotated head model by projecting the virtual representation of the first hearing aid type onto the three-dimensional head model according to the target orientation relative to the first reference feature (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user is mapped to generating the first annotated head model by projecting the virtual representation of the first hearing aid type onto the three-dimensional head model according to the target orientation relative to the first reference feature).
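For illustration only, projecting a virtual device representation onto a three-dimensional head model at a target orientation relative to a detected reference feature, as claim 17 recites, can be sketched as a rigid transform: rotate the device mesh to the target orientation, then translate it to the reference landmark. The toy mesh, landmark coordinates, and yaw angle below are assumed values:

```python
# Illustrative sketch only: anchor a device mesh at an ear landmark on a
# head model after rotating it to a target orientation about the Z axis.

import numpy as np

def rotation_z(deg: float) -> np.ndarray:
    # 3x3 rotation matrix for a yaw of `deg` degrees about the Z axis.
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def place_device(device_vertices: np.ndarray, landmark: np.ndarray,
                 yaw_deg: float) -> np.ndarray:
    # Rotate the device into the target orientation, then translate it so
    # it sits at the detected reference feature on the head model.
    return device_vertices @ rotation_z(yaw_deg).T + landmark

device = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])        # toy device mesh (3 vertices)
ear_landmark = np.array([8.5, 0.0, 3.0])    # assumed reference feature position
annotated = place_device(device, ear_landmark, yaw_deg=15.0)
print(annotated)                            # vertices of the placed device
```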
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pontoppidan, etc. (US 20230037356 A1) in view of Goodall, etc. (US 20190046794 A1), further in view of Jobani (US 20150382123 A1), Wu, etc. (US 20240296931 A1), Bologna, etc. (US 20200100554 A1), and Hughes, etc. (US 20100013661 A1).
Regarding claim 6, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 1 as outlined above. Further, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach the method of Claim 1, further comprising, during the first time period:
matching the user to a second hearing aid type in the corpus of hearing aid types (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that hearing aids C1, I1, and B1 are mapped to the corpus of hearing aid types, and when the second hearing aid type I1 is evaluated, it is mapped to the second hearing aid type), based on the hearing profile and the ear morphology (See Jobani: Figs. 1-5, and [0045], “The program may run on the server 104 and processes the plurality of received images and/or video of the structure of the ear(s), the plurality of design preferences from the users, and/or other information. The program may thereby allow the plurality of users to design their own personalized earphone units 114 that would form a comfortable fit with the ears of the user without falling off, and would be specifically designed to stay in place while the user engages in his/her desired activities while wearing the earphone(s). In some embodiments of the invention, the mobile application may suggest at least one design for the plurality of earphones and based on that suggestion, the users may create the custom fit personalized earphone units 114”; and [0059], “Then as in block 514, the three dimensional printer unit 112 is operated to print the personalized earphone unit 114. Then different audio electronic components are inserted to the personalized earphone unit 114 casing printed using the above said method for generating the personalized earphone unit 114. In some embodiments the audio electronic components were inserted to the personalized earphone unit 114 during the printing process using the 3d printer unit 112. Post-Processing Procedure: There are various possible post-processing steps, depending on user's preferences, including (1) tumble smoothing to smooth the 3D print, (2) vapor finishing, (3) PAD printing or silk printing to print graphics on the earphones, (4) Vapor Deposition to coat the printed earphone with metal coating, (5) “lost wax investment casting,” (6) 3D printing of a mold of the intended earphones in order to cast them in various materials such as resin, polyurethane, metals, rubbers, etc. i.e., materials that are not yet efficiently printed using 3D printers, and/or (7) coating, painting, or dipping the printed earphones. These various post-processing procedures may be utilized to achieve cosmetic and/or utilitarian (such as comfort, durability, heat resistance) objectives of the user”. Note that the personalized custom fit earpieces printed based on the various information including user preference and ear structures are mapped to matching the user to a second hearing aid type in the corpus of hearing aid types, based on the hearing profile and the ear morphology);
accessing a virtual representation of the second hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to accessing a virtual representation of the second hearing aid type);
accessing a second fitment definition of the second hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to accessing a second fitment definition of the second hearing aid type);
generating a second annotated head model depicting the second hearing aid positioned accurately on the ear of the user (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user, when the second loop is repeated, is mapped to generating a second annotated head model depicting the second hearing aid positioned accurately on the ear of the user) based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager, which are analyzed to determine the position of the earpieces, are mapped to the first set of image data);
the virtual representation of the second hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to the virtual representation of the second hearing aid type); and
the second fitment definition of the second hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to the second fitment definition of the second hearing aid type);
rendering the second annotated head model for the user (See Bologna: Fig. 14, and [0167], “FIG. 14 shows multiple views of a three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, having a number of anthropometric points 120.64.2, 220.64.2 positioned thereon. As shown in FIG. 14, the points 120.64.2, 220.64.2 are positioned on the tip of the nose, edges of the eyes, between the eyes, the forwardmost edge of the chin, edges of the lips, and other locations. The anthropometric landmarks that are placed on the head model 120.99, 220.99 are then aligned with the anthropometric landmarks of the generic model using any of the alignment methods that are disclosed above (e.g., expectation-maximization, iterative closest point analysis, iterative closest point variant, Procrustes alignment, manifold alignment, and etc.) or methods that are known in the art”. Note that the three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to rendering the second annotated head model for the user);
prompting the user to confirm a selected hearing aid type, from the first hearing aid type and the second hearing aid type (See Jobani: Fig. 1, and [0066], “Thus the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via the photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point etc. The 3d model is sent to the server 104 and confirmation notification is sent to the user either as a text message, email or app notification as a push message to the phone. If the process fails, then the images cannot be further used for creating the 3d model of the earpiece and a notification is sent to the user to repeat the capture”. Note that the confirmation notification sent to the user, e.g. as a text message, is mapped to prompting the user to confirm a selected hearing aid type, from the first hearing aid type and the second hearing aid type); and
in response to confirmation of the hearing aid type by the user, queuing delivery of a hearing aid, of the selected hearing aid type, to a location identified by the user.
However, Pontoppidan, modified by Goodall, Jobani, Wu, and Bologna, fails to explicitly disclose, in response to confirmation of the hearing aid type by the user, queuing delivery of a hearing aid, of the selected hearing aid type, to a location identified by the user.
Hughes, however, teaches, in response to confirmation of the hearing aid type by the user, queuing delivery of a hearing aid, of the selected hearing aid type, to a location identified by the user (See Hughes: Fig. 1, and [0026], “In the embodiment described above where orders can be taken via a scanner, the user's order can be queued and entered in the system via the scanner. In another embodiment, the user may use a pad of paper, touch pad, or other type of ordering system in order to queue his order”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to have, in response to confirmation of the hearing aid type by the user, queuing delivery of a hearing aid, of the selected hearing aid type, to a location identified by the user, as taught by Hughes, in order to be more easily utilized by a person with a hearing impairment (See Hughes: Fig. 1, and [0034], “The present invention provides a system and method for making a service venue (i.e. a drive-through facility) accessible to the hard of hearing. The present invention also provides a kit for retrofitting/converting a service venue to one more easily used by a person with a hearing impairment. The details of the methods are apparent from a review of the foregoing description”). Pontoppidan teaches a method and system that may optimize the setting parameters for the hearing aids using an AI model to minimize the cost functions by simulating acoustic signals and user feedback about the sound quality across physical environments and hearing aid combinations, while Hughes teaches a system and method that may queue customer orders. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan by Hughes to queue the customer order of the hearing aid. The motivation to modify Pontoppidan by Hughes is “Use of known technique to improve similar devices (methods, or products) in the same way”.
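For illustration only, the order-queuing step supplied by Hughes in the combination above can be sketched as a simple first-in, first-out queue that accepts an order only once the user confirms a selection. The order fields and function name are assumptions for demonstration:

```python
# Illustrative sketch only: queue a delivery order for a confirmed
# hearing aid type, to a user-identified location (FIFO fulfillment).

from collections import deque

delivery_queue = deque()

def confirm_and_queue(user_id: str, hearing_aid_type: str,
                      confirmed: bool, location: str) -> bool:
    # Only a confirmed selection is queued for delivery.
    if not confirmed:
        return False
    delivery_queue.append({"user": user_id,
                           "type": hearing_aid_type,
                           "deliver_to": location})
    return True

confirm_and_queue("alice", "C1", confirmed=True, location="123 Main St.")
print(delivery_queue.popleft())  # FIFO: next order to fulfill
```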
Regarding claim 19, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 16 as outlined above. Further, Pontoppidan, Goodall, Jobani, Wu, Bologna, and Hughes teach the method of Claim 16, further comprising, during the first time period:
matching the user to a second hearing aid type in the corpus of hearing aid types (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that hearing aids C1, I1, and B1 are mapped to the corpus of hearing aid types, and when the second hearing aid type I1 is evaluated, it is mapped to the second hearing aid type), based on the ear morphology (See Jobani: Figs. 1-5, and [0045], “The program may run on the server 104 and processes the plurality of received images and/or video of the structure of the ear(s), the plurality of design preferences from the users, and/or other information. The program may thereby allow the plurality of users to design their own personalized earphone units 114 that would form a comfortable fit with the ears of the user without falling off, and would be specifically designed to stay in place while the user engages in his/her desired activities while wearing the earphone(s). In some embodiments of the invention, the mobile application may suggest at least one design for the plurality of earphones and based on that suggestion, the users may create the custom fit personalized earphone units 114”; and [0059], “Then as in block 514, the three dimensional printer unit 112 is operated to print the personalized earphone unit 114. Then different audio electronic components are inserted to the personalized earphone unit 114 casing printed using the above said method for generating the personalized earphone unit 114. In some embodiments the audio electronic components were inserted to the personalized earphone unit 114 during the printing process using the 3d printer unit 112. Post-Processing Procedure: There are various possible post-processing steps, depending on user's preferences, including (1) tumble smoothing to smooth the 3D print, (2) vapor finishing, (3) PAD printing or silk printing to print graphics on the earphones, (4) Vapor Deposition to coat the printed earphone with metal coating, (5) “lost wax investment casting,” (6) 3D printing of a mold of the intended earphones in order to cast them in various materials such as resin, polyurethane, metals, rubbers, etc. i.e., materials that are not yet efficiently printed using 3D printers, and/or (7) coating, painting, or dipping the printed earphones. These various post-processing procedures may be utilized to achieve cosmetic and/or utilitarian (such as comfort, durability, heat resistance) objectives of the user”. Note that the personalized custom fit earpieces printed based on the various information including user preference and ear structures are mapped to matching the user to a second hearing aid type in the corpus of hearing aid types, based on the ear morphology);
accessing a virtual representation of the second hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to accessing a virtual representation of the second hearing aid type);
accessing a second fitment definition of the second hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to accessing a second fitment definition of the second hearing aid type);
generating a second annotated head model depicting the second hearing aid positioned accurately on the ear of the user (See Jobani: Figs. 5A-B, and [0060], “Then as is block 524, the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via a photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point”. Note that creating a 3D model of the user's ear and using the 3D ear model to create custom-fit earpieces for the user, when the second loop is repeated, is mapped to generating a second annotated head model depicting the second hearing aid positioned accurately on the ear of the user) based on:
the first set of image data (See Goodall: Figs. 43-46, and [0361], “In an aspect, method 4300 in FIG. 43 includes capturing, with image capture circuitry on the computing device, via a user-facing imager associated with a computing device, an image of a user of the computing device, as indicated at 4302; processing the image, using image processing circuitry on the computing device, to determine at least one parameter as indicated at 4304; and controlling, with neural stimulus control signal determination circuitry on the computing device, based at least in part on the at least one parameter, delivery of a stimulus to at least one nerve innervating an ear of the user with the ear stimulation device, as indicated at 4306”. Note that the images captured via the user-facing imager, which are analyzed to determine the position of the earpieces, are mapped to the first set of image data);
the virtual representation of the second hearing aid type (See Pontoppidan: Fig. 6, and [0199], “FIG. 6 shows a fourth embodiment of a hearing system according to the present disclosure. The embodiment of a hearing system illustrated in FIG. 6 is based on a partition of the system in a hearing aid and a (e.g. handheld) processing device hosting the simulation model of the hearing aid as well as a user interface for the hearing aid (cf. arrow denoted ‘User input via APP’). The handheld processing device is indicated in FIG. 6 as ‘Smartphone or dedicated portable processing device (comprising or having access to AI-hearing model)’. The simulation model and possibly the entire fitting system of the hearing aid may be accessible via an APP on the handheld processing device”. Note: the simulation model of the hearing aid, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to the virtual representation of the second hearing aid type); and
the second fitment definition of the second hearing aid type (See Wu: Figs. 5-7, and [0051], “FIG. 7 is a schematic diagram illustrating a system for recommending hearing aids according to another embodiment of the present invention. Please refer to FIG. 7. When the hearing aid compensation parameter HA outputted by the user device 10 is equal to the specific hearing aid compensation parameter and the user parameter USER belongs to the high frequency heavy hearing loss plus the user hearing-loss value, the management server 20 calculates the recommendation coefficients of the hearing aids C1, I1 and B1 are 0.89, 0.34 and 0, respectively. At this time, the recommended list only includes hearing aid C1. From the foregoing embodiments, it can be seen that when the data of the hearing aid compensation parameter HA and the user parameter USER are clearer, the estimated accuracy of the recommendation coefficient is also higher”. Note that the hearing aid suitability to the hearing loss (based on the recommendation coefficients) is mapped to the second fitment definition of the second hearing aid type);
rendering the second annotated head model for the user (See Bologna: Fig. 14, and [0167], “FIG. 14 shows multiple views of a three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, having a number of anthropometric points 120.64.2, 220.64.2 positioned thereon. As shown in FIG. 14, the points 120.64.2, 220.64.2 are positioned on the tip of the nose, edges of the eyes, between the eyes, the forwardmost edge of the chin, edges of the lips, and other locations. The anthropometric landmarks that are placed on the head model 120.99, 220.99 are then aligned with the anthropometric landmarks of the generic model using any of the alignment methods that are disclosed above (e.g., expectation-maximization, iterative closest point analysis, iterative closest point variant, Procrustes alignment, manifold alignment, and etc.) or methods that are known in the art”. Note that the three-dimensional (3D) rendering of the body part model 120.99, 220.99, namely a head model, when the second loop is repeated and the second hearing aid type I1 is evaluated, is mapped to rendering the second annotated head model for the user);
prompting the user to confirm a hearing aid type, from the first hearing aid type and the second hearing aid type (See Jobani: Fig. 1, and [0066], “Thus the video or the images detailing the front and/or back structure of the pair of ears of the user is processed to create a 3d model of the user's ear via the photogrammetry process involving search and matching feature point in the plurality of images, creating a coarse cloud point, creating dense cloud point, running a cloud point cleanup process in order to create a smooth mesh from the cloud point etc. The 3d model is sent to the server 104 and confirmation notification is sent to the user either as a text message, email or app notification as a push message to the phone. If the process fails, then the images cannot be further used for creating the 3d model of the earpiece and a notification is sent to the user to repeat the capture”. Note that the confirmation notification sent to the user, e.g. as a text message, is mapped to prompting the user to confirm a hearing aid type, from the first hearing aid type and the second hearing aid type); and
in response to confirmation of the hearing aid type by the user, queuing delivery of a hearing aid, of the hearing aid type, to a location identified by the user (See Hughes: Fig. 1, and [0026], “In the embodiment described above where orders can be taken via a scanner, the user's order can be queued and entered in the system via the scanner. In another embodiment, the user may use a pad of paper, touch pad, or other type of ordering system in order to queue his order”).
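For illustration only, the following is a minimal Python sketch of the first stage of the photogrammetry pipeline quoted from Jobani above: searching for and matching feature points across a plurality of ear images, which would seed the coarse point cloud. The image paths, ORB settings, and function name are assumptions of this sketch, not taken from Jobani; the dense-cloud, cleanup, and meshing stages are only indicated in comments.

```python
# Hypothetical sketch of the feature-search-and-match stage Jobani
# describes; paths and parameters are illustrative assumptions.
import cv2

def match_ear_features(path_a: str, path_b: str, max_matches: int = 50):
    """Detect and match ORB keypoints between two ear photographs."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Matched keypoints would seed the coarse point cloud; densification,
    # cleanup, and meshing (the later Jobani steps) are omitted here.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
            for m in matches[:max_matches]]
```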
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pontoppidan et al. (US 20230037356 A1) in view of Goodall et al. (US 20190046794 A1), further in view of Jobani (US 20150382123 A1), Wu et al. (US 20240296931 A1), Bologna et al. (US 20200100554 A1), and Lawther et al. (US 20080298643 A1).
Regarding claim 13, Pontoppidan, Goodall, Jobani, Wu, and Bologna teach all the features with respect to claim 1 as outlined above. Further, Goodall teaches the method of Claim 1:
wherein extracting the ear morphology from the first set of image data comprises detecting, in the first set of image data:
outer ear dimensions (See Goodall: Fig. 1, and [0130], “For reference, FIG. 1 depicts an ear 100 of a human subject, showing anatomical structures which may be referred to herein. The external portion of ear 100 is referred to as the pinna 102. FIG. 1 depicts a front/side view of ear 100, showing anterior surface of pinna 104, and a back view of ear 100, showing posterior surface of pinna 106 as well as head 108 of the subject. The surface of the head 108 adjacent the pinna 102 is indicated by shading and reference number 110. Anatomical features of the ear include external auditory meatus 112 (the external ear canal), helix 114, lobe 116, and tragus 118. Concha 120, the indented region in the vicinity of external auditory meatus 112, is comprised of cymba 122 and cavum 124, and bounded by antitragus 126 and antihelix 128. Antihelix 128 includes inferior (anterior) crus of antihelix 130 and superior (posterior) crus of antihelix 132, which bound triangular fossa 134”);
helix dimensions (See Goodall: Fig. 1, and [0130], as quoted above);
antihelix dimensions (See Goodall: Fig. 1, and [0130], as quoted above);
concha dimensions (See Goodall: Fig. 1, and [0130], as quoted above);
external auditory canal dimensions (See Goodall: Fig. 1, and [0130], as quoted above);
pinna dimensions (See Goodall: Fig. 1, and [0130], as quoted above);
skin color; and
hair color.
However, Pontoppidan, as modified by Goodall, Jobani, Wu, and Bologna, fails to explicitly disclose skin color and hair color.
However, Lawther teaches skin color (See Lawther: Fig. 1, and [0060], “The local features could include a combination of several disparate feature types such as Eigenfaces, facial measurements, color/texture information, wavelet features etc. Alternatively, the local features can additionally be represented with quantifiable descriptors such as eye color, skin color, hair color/texture, and face shape”); and
hair color (See Lawther: Fig. 1, and [0060], as quoted above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pontoppidan to detect skin color and hair color, as taught by Lawther, in order to solve the multi-class face recognition problem by using the pair-wise classification paradigm and a composite head model built from various human features, such as size, shape, location, relationships among the features, skin color, hair color, and texture (See Lawther: Fig. 2, and [0090], “In Step 222, person identification is continued using interactive person identifier 250 and person classifier 244 until all of the faces of identifiable people are classified in the collection of images taken at an event. If John and Jerome are brothers, the facial similarity can require additional analysis for person identification. In the family photo domain, the face recognition problem entails finding the right class (person) for a given face among a small (typically in the 10s) number of choices. This multi-class face recognition problem can be solved by using the pair-wise classification paradigm; where two-class classifiers are designed for each pair of classes”). Pontoppidan teaches a method and system that may optimize hearing aid setting parameters using an AI model to minimize cost functions by simulating acoustic signals and user feedback about sound quality across physical environments and hearing aid combinations; Lawther teaches a system and method that may create a head model and correctly recognize persons by accurately measuring the size, shape, location, and spatial relationships among objects, accurately measuring the color and texture of objects, and accurately classifying the key subject matter. Therefore, it would have been obvious to one of ordinary skill in the art to modify Pontoppidan in view of Lawther to create head models with various features extracted from captured images, including skin color and hair color. The motivation to modify Pontoppidan by Lawther is “Use of known technique to improve similar devices (methods, or products) in the same way”.
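To illustrate the pair-wise classification paradigm quoted from Lawther, the following hedged sketch trains one binary classifier per pair of classes (one-vs-one) over synthetic descriptor vectors; the descriptor dimensionality, class count, and choice of a linear SVM are assumptions of this sketch, not drawn from Lawther.

```python
# Hedged sketch of one-vs-one (pair-wise) multi-class classification;
# the feature vectors are random stand-ins for real facial descriptors
# (eye color, skin color, hair color/texture, shape measurements).
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))    # 60 faces, 16-dim descriptors (synthetic)
y = rng.integers(0, 4, size=60)  # 4 people, e.g. a family photo domain

# One binary classifier is trained per pair of classes: 4*3/2 = 6 in total.
clf = OneVsOneClassifier(LinearSVC()).fit(X, y)
print(clf.predict(X[:5]))
```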
Allowable Subject Matter
Claims 4 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein generating the first annotated head model depicting the first hearing aid positioned accurately on an ear of the user comprises: accessing a second photographic image of the user, captured via a forward-facing optical sensor arranged in a mobile device; detecting a position of eyes of the user in the second photographic image; and defining a virtual projection plane angularly offset from the position of eyes of the user detected in the second photographic image; wherein rendering the first annotated head model for the user comprises: rendering a projection of the first annotated head model, onto the virtual projection plane, on a display integrated into the mobile device; and further comprising, during the second time period: prompting the user to confirm the adjustment prompt; and in response to confirmation of the adjustment prompt by the user: prompting the user to capture a third set of image data via the forward-facing optical sensor arranged in the mobile device; detecting a second position of the first hearing aid, arranged on the ear of the user, in the third set of image data; and in response to the second position of the first hearing aid approximating the first fitment definition of the first hearing aid, prompting the user that the first hearing aid is correctly fitted”.
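The projection-plane limitation quoted above can be pictured with a small geometric sketch, assuming the eye midpoint is already detected, the angular offset is a yaw about the vertical axis, and the projection is orthographic; none of these specifics are stated in the claim.

```python
# Illustrative sketch (assumed geometry): define a plane angularly offset
# from a detected eye midpoint and project head-model points onto it.
import numpy as np

def projection_plane(eye_midpoint: np.ndarray, offset_deg: float):
    """Return a plane (origin, unit normal) yawed about the vertical axis."""
    theta = np.radians(offset_deg)
    # Start from a normal facing the viewer (+z) and rotate by the offset.
    normal = np.array([np.sin(theta), 0.0, np.cos(theta)])
    return eye_midpoint, normal

def project(points: np.ndarray, origin: np.ndarray, normal: np.ndarray):
    """Orthogonally project 3-D points onto the plane."""
    d = (points - origin) @ normal
    return points - np.outer(d, normal)

origin, n = projection_plane(np.array([0.0, 1.6, 0.3]), offset_deg=20.0)
head_points = np.array([[0.05, 1.62, 0.28], [-0.07, 1.58, 0.31]])
print(project(head_points, origin, n))
```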
Claims 5 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein accessing the first fitment definition of the first hearing aid type comprises accessing the first fitment definition defining: a target reference feature on a target head; and a target orientation of the first hearing aid type, on the target head, relative to the target reference feature; wherein detecting the position of the first hearing aid, arranged on the ear of the user, in the second set of image data comprises: detecting a first reference feature, on the head of the user and analogous to the target reference feature, in the second set of image data; and extracting a first orientation of the first hearing aid, located on the ear of the user, relative to the first reference feature from the second set of image data; further comprising: calculating a spatial difference between the target orientation of the first hearing aid type and the first orientation of the first hearing aid; and wherein generating the adjustment prompt to adjust placement of the first hearing aid on the ear of the user comprises, in response to the spatial difference exceeding a threshold difference: generating the adjustment prompt to adjust placement of the first hearing aid on the ear of the user to reduce the spatial difference, approximating the target orientation.”
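The spatial-difference limitation quoted above admits a compact numeric sketch if orientations are modeled as unit direction vectors and the difference as the angle between them; the vector representation and the 10-degree threshold below are assumptions for illustration.

```python
# Minimal sketch: compare a detected hearing aid orientation against a
# target orientation; exceeding the threshold would trigger the prompt.
import numpy as np

def adjustment_needed(target_dir, observed_dir, threshold_deg=10.0) -> bool:
    t = np.asarray(target_dir, dtype=float)
    o = np.asarray(observed_dir, dtype=float)
    t, o = t / np.linalg.norm(t), o / np.linalg.norm(o)
    angle = np.degrees(np.arccos(np.clip(t @ o, -1.0, 1.0)))
    return angle > threshold_deg  # True -> generate the adjustment prompt

print(adjustment_needed([0, 0, 1], [0.3, 0, 0.95]))
```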
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein accessing the first set of image data comprises: prompting the user to manipulate a mobile device, with a color camera, facing a side of the head of the user; capturing a sequence of images; for each image in the sequence of images, scanning the image for the ear of the user; and in response to detecting the ear of the user in a target image, in the sequence of images, outputting haptic feedback via the mobile device; wherein generating the first annotated head model depicting the first hearing aid positioned accurately on the ear of the user comprises: finding a reference in the target image; calculating a scale value based on the reference; scaling the virtual representation of the first hearing aid type according to the scale value; and projecting the virtual representation of the first hearing aid type, scaled according to the scale value, onto the target image to generate an annotated target image; and wherein rendering the first annotated head model comprises: displaying the annotated target image to the user.”
Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein accessing the first set of image data comprises: accessing the first set of image data comprising a video clip; wherein generating the first annotated head model comprises: for each frame, in a sequence of frames in the video clip: detecting a reference ear morphology feature in the video frame; detecting a reference object in the video frame; extracting a dimension of the reference object from the video frame; calculating a scalar value based on the dimension; scaling the virtual representation of the first hearing aid type according to the scalar value; and projecting the virtual representation of the first hearing aid type, scaled according to the scalar value, onto the video frame relative to the reference ear morphology feature to generate an annotated video frame, in a sequence of annotated video frames; and assembling the sequence of annotated video frames into the first annotated head model comprising an annotated video clip; and wherein rendering the first annotated head model for the user comprises: playing the annotated video clip.”
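The scaling steps quoted above can be sketched as follows, assuming the reference object has a known physical width (a credit card, 85.6 mm, is used here as a stand-in) and the overlay is measured in pixels; every dimension in the sketch is invented for illustration.

```python
# Hypothetical sketch: derive a per-frame scalar from a reference object
# of known size, then scale the hearing aid overlay accordingly.
REFERENCE_WIDTH_MM = 85.6  # assumed reference object (credit card width)

def frame_scale(reference_width_px: float, mm_per_px_of_model: float) -> float:
    """Pixels-per-mm in this frame, times the model's native mm-per-pixel."""
    px_per_mm = reference_width_px / REFERENCE_WIDTH_MM
    return px_per_mm * mm_per_px_of_model

def scale_overlay(overlay_size_px: tuple, scalar: float) -> tuple:
    w, h = overlay_size_px
    return (round(w * scalar), round(h * scalar))

# Example: the reference spans 428 px and the model is 0.25 mm per pixel.
print(scale_overlay((120, 80), frame_scale(428.0, 0.25)))
```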
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein accessing the first set of image data depicting the head of the user comprises: capturing an opportunistic scan of the user's head, via a color camera, while the user receives the hearing assessment, to produce a set of color images of the head of the user; wherein generating the first annotated head model depicting the first hearing aid positioned accurately on the ear of the user comprises: assembling the set of color images into a three-dimensional head model; accessing the three-dimensional head model; detecting a position of eyes of the user in the three-dimensional head model; and defining a virtual projection plane angularly offset from the position of eyes of the user detected in the three-dimensional head model; and wherein rendering the first annotated head model for the user comprises: rendering a projection of the first annotated head model, onto the virtual projection plane, on a display integrated into the mobile device.”
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1, wherein generating the hearing profile of the user based on the result of the hearing assessment comprises generating a baseline hearing profile for the user comprising: a set of gain values based on a first volume setting, each gain value in the set of gain values corresponding to a frequency band in a set of frequency bands spanning a human-audible frequency range; accessing a first stimulus comprising a first spoken phrase characterized by a first frequency spectrum predominantly within a first frequency band in the set of frequency bands; playing the first stimulus amplified by a first gain in the first frequency band; playing the first stimulus amplified by a second gain in the first frequency band different from the first gain; receiving a first preference input representing a preference of the user from amongst the first stimulus amplified by the first gain in the first frequency band and the first stimulus amplified by the second gain in the first frequency band; and modifying a first gain value, corresponding to the first frequency band, in the baseline hearing profile based on the first preference input to generate a first refined hearing profile compensating for hearing deficiency of the user.”
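The gain-refinement limitation quoted above can be sketched as a paired-comparison update over per-band gains; the frequency band edges, gain values, and source of the preference input are assumptions of the sketch.

```python
# Hedged sketch: a baseline profile holds one gain per frequency band;
# after the user compares two amplified renderings of a stimulus, the
# preferred gain is written back into the band.
BANDS_HZ = [(250, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]

def refine_band(profile: list, band_index: int, gain_a_db: float,
                gain_b_db: float, prefers_a: bool) -> list:
    """Return a refined profile with the user's preferred gain applied."""
    refined = list(profile)
    refined[band_index] = gain_a_db if prefers_a else gain_b_db
    return refined

baseline = [10.0, 12.0, 15.0, 20.0, 25.0]  # dB gain per band (synthetic)
# Suppose the spoken phrase's spectrum sits mostly in band 3 (2-4 kHz)
# and the user prefers the louder of the two renderings.
print(refine_band(baseline, 3, gain_a_db=20.0, gain_b_db=24.0, prefers_a=False))
```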
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein extracting the ear morphology from the first set of image data comprises detecting a set of constraining dimensions in the first set of image data of the ear of the user; further comprising accessing a set of preferences of the user; wherein matching the user to the first hearing aid type in the corpus of hearing aid types comprises: accessing a set of hearing aid types, each hearing aid type defining a form factor; identifying a subset of hearing aid types, in the set of hearing aid types, defining form factors conforming to the set of constraining dimensions; ranking the subset of hearing aid types based on the set of preferences of the user; and selecting the first hearing aid type, from the subset of hearing aid types, corresponding to a highest rank in the subset of hearing aid types; and wherein loading the representation of the hearing profile onto the first hearing aid comprises configuring the first hearing aid, of the first hearing aid type, with the hearing profile for the user.”
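The matching-and-ranking limitation quoted above can be sketched, under an assumed data model for form factors and preferences, as a filter-then-rank selection over a small catalog; the dimension names, catalog entries, and scoring rule are all invented for this sketch.

```python
# Illustrative sketch (assumed data model): filter hearing aid form
# factors against the ear's constraining dimensions, then rank the
# survivors by user preferences and pick the top match.
def match_hearing_aid(constraints: dict, catalog: list, preferences: dict):
    def fits(form_factor: dict) -> bool:
        # Every form-factor dimension must fit within its constraint.
        return all(form_factor[k] <= constraints[k] for k in form_factor)

    def score(aid: dict) -> float:
        return sum(preferences.get(feature, 0.0) for feature in aid["features"])

    candidates = [aid for aid in catalog if fits(aid["form_factor"])]
    return max(candidates, key=score) if candidates else None

catalog = [
    {"type": "BTE", "form_factor": {"canal_mm": 0.0, "concha_mm": 18.0},
     "features": ["rechargeable"]},
    {"type": "ITE", "form_factor": {"canal_mm": 6.0, "concha_mm": 14.0},
     "features": ["discreet", "rechargeable"]},
]
constraints = {"canal_mm": 7.0, "concha_mm": 16.0}
print(match_hearing_aid(constraints, catalog, {"discreet": 1.0}))
```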
Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein extracting the ear morphology from the first set of image data comprises: detecting a set of constraining dimensions in the first set of image data of the ear of the user; extracting a set of features from the region of the image; and interpreting the set of constraining dimensions of the ear of the user based on the set of features; wherein matching the user to the first hearing aid type in the corpus of hearing aid types comprises: accessing a set of hearing aid types, each hearing aid type defining a form factor of a hearing aid, in a set of hearing aids; identifying a subset of hearing aid types, in the set of hearing aid types, defining form factors conforming to the set of constraining dimensions; and selecting a first hearing aid type from the subset of hearing aids; and wherein loading the representation of the hearing profile onto the first hearing aid comprises configuring the first hearing aid of the first hearing aid type with the hearing profile for the user.”
Claim 14 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art search does not teach the claimed limitations of “The method of Claim 1: wherein the first time period is located at a kiosk; wherein accessing the first set of image data depicting the head of the user comprises: capturing a three-dimensional optical scan of the head of the user, via a forward-facing color camera arranged in the kiosk; wherein generating the first annotated head model depicting the first hearing aid positioned accurately on the ear of the user comprises: projecting the three-dimensional optical scan of the head of the user into a projection plane; and projecting the virtual representation of the first hearing aid type onto the three-dimensional optical scan of the head of the user, based on the first fitment definition of the first hearing aid type, in the projection plane; and wherein rendering the first annotated head model for the user comprises: rendering the first annotated head model in the projection plane.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GORDON G LIU whose telephone number is (571)270-0382. The examiner can normally be reached Monday - Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GORDON G LIU/Primary Examiner, Art Unit 2618