Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because the abstract idea has not been integrated into a practical application, and the claims further do not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided such analysis below.
Step 1: Claims 1-18 and 20 are directed to a method and claim 19 is directed to a system, which fall within the statutory categories of process and machine, respectively. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.
In order to evaluate the Step 2A inquiry “Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?” we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon or an abstract idea and, at Step 2A Prong 2, whether the claim recites additional elements that integrate the judicial exception into a practical application.
Step 2A Prong 1:
Claims 1 and 19-20: The limitations of “process the plurality of images to determine the position of at least one anatomical landmark of the user's ear and of one or more parts of the hearing aid” (claim 19), “applying a facial landmark algorithm on the plurality of images to identify a position of an ear of the user, at least one anatomical landmark of the ear and one or more parts of the hearing aid”, “deriving, from the determined position of the at least one anatomical landmark of the ear of the user and from the one or more parts of the hearing aid, one or more features related to a relative position of the hearing aid relative to the at least one anatomical landmark of the ear of the user in the plurality of images”, “determining the correctness of a position of the hearing aid”, “differentiate between the following scenarios: a) the hearing aid is correctly positioned; b) the hearing aid is incorrectly positioned; and c) a structural element of the hearing aid needs to be changed” (claims 1 and 19), “identify whether a structural element of the hearing aid needs to be changed in order to achieve a correct fit of the hearing aid to the unique anatomy of the user's ear” (claim 20), and “if changing of a structural element of the hearing aid is required, the algorithm is configured to identify which structural element needs to be changed” (claim 20), as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind. For example, a person can think about, observe, judge and evaluate whether a hearing aid is correctly positioned or whether a structural element needs changing.
Therefore, yes, claims 1 and 19-20 recite judicial exceptions.
Because the claims have been identified as reciting judicial exceptions, Step 2A Prong 2 will evaluate whether the claims are directed to the judicial exception.
Step 2A Prong 2:
Claims 1 and 19-20: The judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements – “requesting…a hearing aid user to capture and/or upload a plurality of images, the plurality of images comprising at least one frontal face image and at least one side face image”, which is merely insignificant extra-solution data gathering activity (see MPEP § 2106.05(g)) that does not integrate a judicial exception into a practical application. “Via a user interface”, “by applying a machine learning algorithm on the one or more features, wherein the machine learning algorithm is trained using a training set comprising a large plurality of images of ears with hearing aids and a plurality of labels associated with the large plurality of images”, and “wherein the machine algorithm is trained” are recited at a high level of generality and amount to merely using generic computing components as a tool to apply the abstract idea (see MPEP § 2106.05(f)), which does not integrate a judicial exception into a practical application. “Wherein the plurality of images is captured while the user is wearing a hearing aid” and “each label indicating whether the hearing aid is correctly or incorrectly positioned, wherein the large plurality of images comprises images of ears of different subjects and wherein the large plurality of images comprises at least two images of each ear from different angles thereof” are merely a recitation of a field of use/technological environment (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.
“Providing an indication to the user regarding the correctness of the hearing aid position, wherein if the hearing aid is correctly positioned, the indication is indicative of same; wherein if incorrect positioning is identified, the indication comprises a request to reposition the hearing aid, or wherein if changing of a structural element of the hearing aid is required, the indication comprises a request to change the structural element” (claims 1 and 19) and “to provide a request to the user to change the identified structural element” (claim 20) are merely insignificant extra solution data output activity (see MPEP § 2106.05(g)) which does not integrate a judicial exception into practical application.
Therefore, “Do the claims recite additional elements that integrate the judicial exception into a practical application?” No, these additional elements do not integrate the abstract idea into a practical application and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
After evaluating the inquiries set forth in Step 2A Prongs 1 and 2, it has been concluded that claims 1 and 19-20 not only recite a judicial exception but are directed to the judicial exception, as the judicial exception has not been integrated into a practical application.
Step 2B:
Claims 1 and 19-20: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than insignificant extra-solution data gathering/output activity, using a computer as a tool to apply the abstract idea, and a field of use/technological environment, which do not amount to significantly more than the abstract idea. Further, the insignificant extra-solution activity is also well understood, routine, and conventional, see MPEP § 2106.05(d) – The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network… iv. Presenting offers and gathering statistics.
Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No, these additional elements, alone or in combination, do not amount to significantly more than the judicial exception.
Having concluded analysis within the provided framework, Claims 1 and 19-20 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to further claims, claim 2 recites the additional abstract idea of “extracting the at least one anatomical landmark of the user's ear, wherein the extracting comprises applying an image analysis algorithm on the plurality of images”, claim 17 recites the additional abstract idea of “extracting a plurality of features from each of the large plurality of images”, and claim 18 recites the additional abstract idea of “selecting a subset of features from the plurality of features, which subset have a predictive value above a predetermined threshold”, which, as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind.
Claim 3 recites the limitation “the at least one landmark comprises a climax of the helix, an angle of the pinna relative to the head, the crus of helix, the tragus, the intertragic notch, the antitragus, an entrance of the external auditory canal, the cavum and a d-shape of the pinna or any combination thereof”, claim 4 recites “the one or more features comprises two or more of: a distance between a climax of a helix of the ear of the user and a connection point between a body and a tube of the hearing aid, a horizontal and/or vertical distance between an upper band of the tube of the hearing aid and a crus of the helix of the ear of the user, a horizontal and/or vertical distance between a middle band of the tube of the hearing aid and the cymba of the ear of the user, a horizontal position of the hearing aid tube and/or a dome of the hearing aid relative to the concha and/or the entrance of the external auditory meatus of the ear of the user, a position of a lower part of the hearing aid tube in a vertical and/or horizontal plane relative to a tragus, antitragus and/or intertragic notch of the ear of the user”, claim 10 recites “the structural element is selected from a tube length, a tube depth, a standard silicon dome size, a standard silicon dome type, or a custom made earmold”, and claim 16 recites “the large plurality of images comprises a first image of an ear with a correctly positioned hearing aid and a second image of the same ear with an incorrectly positioned hearing aid”, which are merely recitations of a field of use/technological environment.
Claim 5 recites “the at least one side face image comprises at least one left-side face image and at least one right-side face image”, claim 6 recites “the plurality of images are still images”, and claim 7 recites “the plurality of images are derived from a video”, which are merely recitations of a field of use/technological environment and also part of the data gathered from the insignificant extra solution data gathering activity. See above regarding data gathering being “well understood, routine, and conventional”.
Claim 8 recites “the request to reposition the hearing aid comprises instruction regarding how to reposition”, claim 9 recites “the instructions comprises instructions to change an angle of a body of the hearing aid, instruction to position the hearing aid lower or higher than a current position, instruction regarding positioning of a wire/tube on the pinna, instructions regarding position and depth of a receiver/tube inside the ear and/or ear canal, instructions to change a dome of the hearing aid, instructions to change a length of a hearing aid tube and/or a receiver wire or any combination thereof”, which are merely insignificant extra solution data output activity. See above regarding output activity being “well understood, routine, and conventional”.
Claim 11 recites “the method is executed via an App and wherein the capturing of the plurality of images is carried out using a camera of a mobile phone or tablet installed with the App”, which is merely using generic computing components as a tool to apply the abstract idea.
Claim 12 recites “guiding the capturing of the plurality of images”, claim 13 recites “the guiding comprises instructing the user to position the camera for capturing a frontal face image and determining correct face position relative to an image frame of the camera by applying a face recognition tool”, and claim 14 recites “the guiding further comprises instructing the user to turn the face sideways and determining correct face position based on automatic identification of the ear of the user”, which are merely insignificant extra-solution activity. See above regarding data gathering and data output activity being “well understood, routine, and conventional”.
Claim 15 recites “an initial step of guided insertion/positioning of a hearing aid”, which is pre-solution activity as well as insignificant extra-solution activity. The “guided” aspect implies instructions issued relating to the data gathering step. See above regarding data gathering and data output activity being “well understood, routine, and conventional”. The “initial step” means that the activity must be pre-solution activity. It is clear that a hearing aid must initially be positioned before any method for “determining hearing aid position correctness” could be implemented.
With regard to integration into a practical application and whether the additional elements amount to significantly more, claims 2-18 fail Step 2A Prong 2, thus the claims are directed to the judicial exception as it has not been integrated into a practical application, and fail Step 2B as not amounting to significantly more.
Therefore, Claims 2-18 do not recite patent eligible subject matter under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Referring to claims 1-20, claims 1 and 19-20 recite the limitation “each label indicating whether the hearing aid is correctly or incorrectly positioned.” There is insufficient antecedent basis for this limitation in the claim. Examiner interprets this as: the machine learning algorithm is trained using a training set comprising a large plurality of images of ears with hearing aids and a plurality of labels respectively associated with each image of the large plurality of images, each label of the plurality of labels indicating whether the respective hearing aid of the respective image is correctly or incorrectly positioned. Claims 2-18 depend from claim 1; therefore, they are rejected for the same reasons.
Further, claims 1 and 19 state that the machine learning algorithm is trained using labels on images to identify “correctly positioned” or “incorrectly positioned” hearing aids. However, the claims then recite that the algorithm is trained to differentiate between three different scenarios. It is unclear how the algorithm can differentiate between a) correctly positioned, b) incorrectly positioned and c) a need for a change of a structural element when the images in the training set only had a label of correctly or incorrectly positioned. Examiner interprets this as: the machine learning algorithm is trained using a training set comprising a large plurality of images of ears with hearing aids and a first plurality of labels respectively associated with each image of the large plurality of images, each label of the first plurality of labels indicating whether the respective hearing aid of the respective image is correctly or incorrectly positioned, and a second plurality of labels respectively associated with each image of the large plurality of images, each label of the second plurality of labels indicating whether the respective hearing aid of the respective image has a need for a change in a structural element or does not have a need for the change in the structural element…wherein the machine algorithm is trained to differentiate between: a) the hearing aid is correctly positioned; and b) the hearing aid is incorrectly positioned; and between: c) a structural element of the hearing aid needs to be changed; and d) the structural element of the hearing aid does not need to be changed. Once again, claims 2-18 depend from claim 1; therefore, they are rejected for the same reasons.
Referring to claims 1-20, claims 1 and 19-20 recite the limitation “the unique anatomy.” There is insufficient antecedent basis for this limitation in the claim. Examiner interprets this as “a unique anatomy.” Claims 2-18 depend from claim 1; therefore, they are rejected for the same reasons.
Referring to claim 9, claim 9 recites the limitation “the instructions.” There is insufficient antecedent basis for this limitation in the claim. Examiner interprets claim 8 as reciting “comprises instructions.”
Referring to claim 19, claim 19 recites the limitation “the position” in line 4 of page 36. There is insufficient antecedent basis for this limitation in the claim. Examiner interprets this as “a position.”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-9, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al., US Publication No. 20210089773 (from IDS), in view of Pedersen, US Publication No. 20130169779 (from IDS), and Nishimuta et al., US Publication No. 20230159716.
Referring to claim 1, Yang et al. teaches a computer implemented method (para 0017: “The application is to be executed by an electronic device having a camera. The application or app or application program or software application may be a computer program or computer software designed to be executed by an electronic device.”) for determining hearing aid position correctness with respect to the unique anatomy of a specific user’s ear (para 0125: “If the comparison shows that the hearing device is correctly arranged, then the flow continues to step 108. If the comparison shows that the hearing device is not correctly arranged, then the flow continues to step 110”), the method comprising: requesting, via a user interface, a hearing aid user (para 0094: “The application (2) is configured for providing instructions (36) and/or feedback (36) for assisting a user, such as the hearing device wearer (8), in using the application (2).”; para 0032: “the application is configured for providing a first visual guide in the viewfinder, the first visual guide is for assisting the wearer in obtaining a first predetermined placement of his/her head and/or ear relative to the camera, when capturing the first image.”) to capture and/or upload an image, the image comprising at least one face image, wherein the image is captured while the user is wearing a hearing aid (para 0030: “The application is configured for capturing the first image with the camera. Thus, the application may be configured for capturing the first image with the camera application. 
The first image shows at least the ear of the wearer with the hearing device arranged in and/or at the ear of the wearer.”), applying a facial landmark algorithm on the image to identify a position of an ear of the user, at least one anatomical landmark of the ear and one or more parts of the hearing aid; deriving, from the determined position of the at least one anatomical landmark of the ear of the user and from the one or more parts of the hearing aid, one or more features related to a relative position of the hearing aid relative to the at least one anatomical landmark of the ear of the user in the image (para 0036: “the image recognition functionalities, methods, or software utilise image landmarks or features identified, determined or detected on the first image and/or on the reference image. The image landmarks or features may be landmarks or features such as a point, distance, angle, edge, pixel by pixel, and/or gradient. The image landmarks or features may be a location or an area of the image of interest, such as highlighted by the HCP to pay attention to, e.g. where a receiver tube goes over the top of the pinna and/or the orientation of the receiver itself prior to insertion in or at the ear.”); determining the correctness of a position of the hearing aid by applying an algorithm on the one or more features, wherein the algorithm uses an image of an ear with a hearing aid (para 0035: “The image recognition functionalities, methods or software, such as computer vision, may use machine learning, neural networks or artificial intelligence for the comparison.”; para 0036: “[t]he image landmarks or features may be a location or an area of the image of interest, such as highlighted by the HCP to pay attention to, e.g.
where a receiver tube goes over the top of the pinna and/or the orientation of the receiver itself prior to insertion in or at the ear…use image landmarks or features in the comparison between the first image and the reference image”; para 0037: “the application is configured for providing suggestions or instructions to the wearer for assisting the wearer in adjusting the position of the hearing device arranged in and/or at the ear based on the comparison between the first image and the reference image.”); wherein the algorithm is configured to differentiate between the following scenarios: a) the hearing aid is correctly positioned; b) the hearing aid is incorrectly positioned; providing an indication to the user regarding the correctness of the hearing aid position, wherein if the hearing aid is correctly positioned, the indication is indicative of same (para 0037: “The suggestions or instructions may comprise information regarding whether or not the hearing device is correctly arranged in and/or at the ear, such as an indication that the hearing device is correctly or in-correctly arranged in and/or at the ear of the hearing device wearer.”); wherein if incorrect positioning is identified, the indication comprises a request to reposition the hearing aid (para 0127: “In step 110 the hearing device is adjusted, if the comparison in step (106) of the first image and the reference image shows that the hearing device is not correctly arranged in and/or at the ear of the wearer. The application provides suggestions and/or instructions for assisting in adjusting the position of the hearing device. The suggestions and/or instructions are based on the comparison (106) between the first image and the reference image. After the hearing device has been adjusted, the flow may be iterated or repeated by using the application to capture a new first image in step (104) and perform a comparison in step (106) between the new first image and the reference image.”).
However, Yang et al. does not teach capturing front and side images of the user per se or capturing images from different subjects, but Pedersen teaches to capture and/or upload a plurality of images, the plurality of images comprising at least one frontal face image and at least one side face image (para 0092: “the device 12 may be used to obtain two input images 20, with the first input image 20 being a front view of a subject that includes the head, ears, and at least part of the torso, and the second input image 20 being a side view of the subject that includes the head, the ear, and at least part of the torso.”), the plurality of images comprises images of ears of different subjects (para 0075: “the reference image…are collected over time from different fitting procedures of different subjects”), and wherein the plurality of images comprises at least two images of each ear from different angles thereof (para 0092: “two sets of reference images 300 are stored at the second device 14, with the first set of reference images 300 being front images of different persons, and the second set of reference images 300 being side images of the different persons.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to capture multiple images of the ear and from multiple users, as taught in Pedersen, in the method of Yang et al. because it allows for more angles to be compared to a larger set of ears, which allows for a more accurate comparison and analysis.
However, Yang et al. and Pedersen do not teach using machine learning to determine correctness or determining structural issues, but Nishimuta et al. teaches determining correctness by applying a machine learning algorithm on the one or more features (para 0195: “The image may be input into the dental appliance placement assessor 1450, which may use a trained machine learning model…determine whether the dental appliance was correctly placed in the holder”), wherein the machine learning algorithm is trained using a training set comprising: a large plurality of images (para 0182: “For the model training workflow 1405, a training dataset containing hundreds, thousands, tens of thousands, hundreds of thousands or more images should be used to form a training dataset.”), a plurality of labels associated with the large plurality of images, each label indicating whether something is correctly or incorrectly positioned (para 0182: “Each image may include various labels of one or more types of useful information. 
Each image may include, for example, data indicating whether the dental appliance was correctly placed in a holder in the image, a dental appliance type and/or object type for a dental appliance in the image, an indication as to whether an object was correctly placed against a feature of the dental appliance in the image, an indication as to whether or not the dental appliance in the image was damaged, pixel-level or patch-level segmentation of the image into various classes (e.g., bond region, not bond region, successful bond, unsuccessful bond, etc.), and so forth.”; para 0183: “The labels that are used may depend on what a particular machine learning model will be trained to do.”); wherein the machine algorithm is trained to differentiate between the following scenarios (para 0195: “The image may be input into the dental appliance placement assessor 1450, which may use a trained machine learning model…to determine whether the dental appliance was correctly placed in the holder.”): a) correctly positioned; b) incorrectly positioned; (para 0195: “This may include determining…whether an orientation of the dental appliance in the holder is correct…The dental appliance placement assessment 1452 may be a simple indication that the dental appliance was correctly placed in the holder or incorrectly placed in the holder.”) and c) a structural element needs to be changed (para 0195: “This may include determining…whether the dental appliance was placed into a correct type of holder… This information may be used to reposition the dental appliance…in a new holder” – Examiner notes that a type of holder is akin to a structural element); wherein if changing of a structural element is required, the indication comprises a request to change the structural element (para 0195: “This information may be used to reposition the dental appliance…in a new holder” – Examiner notes the “new holder” is akin to the change in structural element). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use machine learning, as taught in Nishimuta et al., in the method of Yang et al. and Pedersen because it would allow a comparison to a much larger data set of trained images, which creates a more accurate determination as compared to a comparison with a single image, which might not always be perfect, especially if a professional is not involved. Further, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to train the machine learning algorithm about structural issues, as taught in Nishimuta et al., in the method of Yang et al. and Pedersen because it provides further ways to assist a user in having a properly positioned and effective hearing aid.
Referring to claim 2, Yang et al. teaches extracting the at least one anatomical landmark of the user's ear, wherein the extracting comprises applying an image analysis algorithm on the image (para 0036) and Pedersen teaches the plurality of images (para 0092). Motivation to combine is the same as in claim 1.
Referring to claim 3, Yang et al. teaches the at least one landmark comprises a climax of the helix, an angle of the pinna relative to the head, the crus of helix, the tragus, the intertragic notch, the antitragus, an entrance of the external auditory canal, the cavum and a d-shape of the pinna or any combination thereof (para 0036; Fig. 1b: image landmarks 38).
Referring to claim 4, Yang et al. teaches the one or more features comprises two or more of: a distance between a climax of a helix of the ear of the user and a connection point between a body and a tube of the hearing aid, a horizontal and/or vertical distance between an upper band of the tube of the hearing aid and a crus of the helix of the ear of the user, a horizontal and/or vertical distance between a middle band of the tube of the hearing aid and the cymba of the ear of the user, a horizontal position of the hearing aid tube and/or a dome of the hearing aid relative to the concha and/or the entrance of the external auditory meatus of the ear of the user, a position of a lower part of the hearing aid tube in a vertical and/or horizontal plane relative to a tragus, antitragus and/or intertragic notch of the ear of the user (para 0036; Fig. 1b: image features 38). Yang et al. teaches in para 0036 that “a point, distance, angle, edge, pixel by pixel, and/or gradient” may be determined and that examples include “where a receiver tube goes over the top of the pinna and/or the orientation of the receiver itself prior to insertion in or at the ear”; therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to determine “a point, distance, angle, edge, pixel by pixel, and/or gradient” regarding any of a plurality of anatomical features and any of a plurality of hearing aid parts, as listed in the claim above, in the method of Yang et al., Pedersen, and Nishimuta et al. because Yang et al. already has the ability to identify ear parts and hearing aid parts and their relationships.
Referring to claim 5, Yang et al. teaches the at least one side face image comprises at least one left-side face image and at least one right-side face image (para 0014).
Referring to claim 6, Yang et al. teaches the image is a still image (para 0007) and Pedersen teaches the plurality of images (para 0092). Motivation to combine is the same as in claim 1.
Referring to claim 8, Yang et al. teaches the request to reposition the hearing aid comprises instructions regarding how to reposition (para 0127).
Referring to claim 9, Yang et al. teaches the instructions comprise instructions to change an angle of a body of the hearing aid, instructions to position the hearing aid lower or higher than a current position, instructions regarding positioning of a wire/tube on the pinna, instructions regarding position and depth of a receiver/tube inside the ear and/or ear canal, instructions to change a dome of the hearing aid, instructions to change a length of a hearing aid tube and/or a receiver wire or any combination thereof (para 0037).
Referring to claim 11, Yang et al. teaches the method is executed via an App and wherein the capturing of the image is carried out using a camera of a mobile phone or tablet installed with the App (para 0018) and Pedersen teaches the plurality of images (para 0092). Motivation to combine is the same as in claim 1.
Referring to claim 12, Yang et al. teaches the method further comprises guiding the capturing of the image (para 0032) and Pedersen teaches the plurality of images (para 0092). Motivation to combine is the same as in claim 1.
Referring to claim 13, Yang et al. teaches the guiding comprises instructing the user to position the camera for capturing a face image and determining correct face position relative to an image frame of the camera by applying a face recognition tool (para 0030) and Pedersen teaches
capturing a frontal face image (para 0092). Motivation to combine is the same as in claim 1.
Referring to claim 14, Yang et al. teaches the guiding further comprises instructing the user to turn the face sideways and determining correct face position based on automatic identification of the ear of the user (paras 0030, 0095).
Referring to claim 15, Yang et al. teaches an initial step of guided insertion/positioning of a hearing aid (para 0100).
Referring to claim 16, Yang et al. teaches the plurality of images comprises a first image of an ear with a correctly positioned hearing aid (para 0127: “After the hearing device has been adjusted, the flow may be iterated or repeated by using the application to capture a new first image in step (104) and perform a comparison in step (106) between the new first image and the reference image.”; para 0126: “the use of the application may be terminated, if the comparison (106) of the first image and the reference image shows that the hearing device is correctly arranged in and/or at the ear of the wearer”) and a second image of the same ear with an incorrectly positioned hearing aid (para 0124: “capture a first image, where the first image shows a current placement or arrangement of the hearing device in and/or at the ear of the wearer”; para 0127: “In step 110 the hearing device is adjusted, if the comparison in step (106) of the first image and the reference image shows that the hearing device is not correctly arranged in and/or at the ear of the wearer.”) and Nishimuta et al. teaches the large plurality of images comprises a first image with a correctly positioned object and a second image with an incorrectly positioned object (para 0182). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to accumulate correctly and incorrectly positioned data, as in Nishimuta et al., from a same user, as in Yang et al., in the method of Yang et al., Pedersen, and Nishimuta et al. because a user that incorrectly positions something is apt to eventually position it correctly, thus providing images for both incorrect and correct positioning, which helps to further train the machine learning algorithm.
Referring to claim 17, Yang et al. teaches extracting a plurality of features from each image (para 0036) and Nishimuta et al. teaches the large plurality of images (para 0182). Motivation to combine is the same as in claim 1.
Referring to claim 18, Yang et al. and Nishimuta et al. teach selecting a subset of features from the plurality of features, which subset has a predictive value above a predetermined threshold (Yang: para 0036; Nishimuta et al.: paras 0146, 0190).
Referring to claim 19, Yang et al. teaches a system for determining hearing aid positioning correctness with respect to the unique anatomy of a specific user’s ear (para 0125: “If the comparison shows that the hearing device is correctly arranged, then the flow continues to step 108. If the comparison shows that the hearing device is not correctly arranged, then the flow continues to step 110”), the system comprising a processing logic configured to: request a hearing aid user (para 0094: “The application (2) is configured for providing instructions (36) and/or feedback (36) for assisting a user, such as the hearing device wearer (8), in using the application (2).”; para 0032: “the application is configured for providing a first visual guide in the viewfinder, the first visual guide is for assisting the wearer in obtaining a first predetermined placement of his/her head and/or ear relative to the camera, when capturing the first image.”) to capture an image, the image comprising at least one face image, wherein the image is captured while the user is wearing a hearing aid (para 0030: “The application is configured for capturing the first image with the camera. Thus, the application may be configured for capturing the first image with the camera application. 
The first image shows at least the ear of the wearer with the hearing device arranged in and/or at the ear of the wearer.”), process the image to determine the position of at least one anatomical landmark of the user's ear and of one or more parts of the hearing aid, wherein the processing comprises applying a facial landmark algorithm on the image to identify a position of an ear of the user, at least one anatomical landmark of the ear and one or more parts of the hearing aid; deriving, from the determined position of the at least one anatomical landmark of the ear of the user and from the one or more parts of the hearing aid, one or more features related to a relative position of the hearing aid relative to the at least one anatomical landmark of the ear of the user in the image (para 0036: “the image recognition functionalities, methods, or software utilise image landmarks or features identified, determined or detected on the first image and/or on the reference image. The image landmarks or features may be landmarks or features such as a point, distance, angle, edge, pixel by pixel, and/or gradient. The image landmarks or features may be a location or an area of the image of interest, such as highlighted by the HCP to pay attention to, e.g. where a receiver tube goes over the top of the pinna and/or the orientation of the receiver itself prior to insertion in or at the ear.”); determine the correctness of a position of the hearing aid by applying an algorithm on the one or more derived features, wherein the algorithm uses an image of an ear with a hearing aid (para 0035: “The image recognition functionalities, methods or software, such as computer vision, may use machine learning, neural networks or artificial intelligence for the comparison.”; para 0036: “The image landmarks or features may be a location or an area of the image of interest, such as highlighted by the HCP to pay attention to, e.g. 
where a receiver tube goes over the top of the pinna and/or the orientation of the receiver itself prior to insertion in or at the ear…use image landmarks or features in the comparison between the first image and the reference image”; para 0037: “the application is configured for providing suggestions or instructions to the wearer for assisting the wearer in adjusting the position of the hearing device arranged in and/or at the ear based on the comparison between the first image and the reference image.”); wherein the algorithm is to differentiate between the following scenarios: a) the hearing aid is correctly positioned; b) the hearing aid is incorrectly positioned; provide an indication to the user regarding the correctness of the hearing aid position, wherein if the hearing aid is correctly positioned, the indication is indicative of same (para 0037: “The suggestions or instructions may comprise information regarding whether or not the hearing device is correctly arranged in and/or at the ear, such as an indication that the hearing device is correctly or in-correctly arranged in and/or at the ear of the hearing device wearer.”); wherein if incorrect positioning is identified, the indication comprises a request to reposition the hearing aid (para 0127: “In step 110 the hearing device is adjusted, if the comparison in step (106) of the first image and the reference image shows that the hearing device is not correctly arranged in and/or at the ear of the wearer. The application provides suggestions and/or instructions for assisting in adjusting the position of the hearing device. The suggestions and/or instructions are based on the comparison (106) between the first image and the reference image. After the hearing device has been adjusted, the flow may be iterated or repeated by using the application to capture a new first image in step (104) and perform a comparison in step (106) between the new first image and the reference image.”).
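The capture/compare/adjust flow that Yang et al. describes in steps 104-110 (paras 0125-0127) amounts to an iterate-until-correct loop. As a minimal sketch for illustration only (all function names here are hypothetical, not from the reference):

```python
# Illustrative sketch only of the capture/compare/adjust flow described in
# Yang et al. paras 0125-0127. Function names are hypothetical placeholders.
def fitting_loop(capture_image, compare_to_reference, instruct_user, max_tries=5):
    """Iterate until the hearing device is correctly arranged or tries run out."""
    for _ in range(max_tries):
        image = capture_image()          # capture a new first image (step 104)
        if compare_to_reference(image):  # compare to the reference image (step 106)
            return "correctly arranged"  # terminate the application (step 108)
        instruct_user(image)             # suggest adjustments to the wearer (step 110)
    return "not resolved"
```

On each pass the wearer adjusts the device per the suggestions, a new first image is captured, and the comparison is repeated, mirroring the iteration Yang et al. describes in para 0127.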
However, Yang et al. does not teach capturing front and side images of the user per se or capturing images from different subjects, but Pedersen teaches to capture a plurality of images, the plurality of images comprising at least one frontal face image and at least one side face image (para 0092: “the device 12 may be used to obtain two input images 20, with the first input image 20 being a front view of a subject that includes the head, ears, and at least part of the torso, and the second input image 20 being a side view of the subject that includes the head, the ear, and at least part of the torso.”), the plurality of images comprises images of ears of different subjects (para 0075: “the reference image…are collected over time from different fitting procedures of different subjects”), and wherein the plurality of images comprises at least two images of each ear from different angles thereof (para 0092: “two sets of reference images 300 are stored at the second device 14, with the first set of reference images 300 being front images of different persons, and the second set of reference images 300 being side images of the different persons.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to capture multiple images of the ear and from multiple users, as taught in Pedersen, in the system of Yang et al. because it allows for more angles to be compared to a larger set of ears, which allows for a more accurate comparison and analysis.
However, Yang et al. and Pedersen do not teach using machine learning to determine correctness or determining structural issues, but Nishimuta et al. teaches determine the correctness by applying a machine learning algorithm on the one or more features (para 0195: “The image may be input into the dental appliance placement assessor 1450, which may use a trained machine learning model…determine whether the dental appliance was correctly placed in the holder”), wherein the machine learning algorithm is trained using a training set comprising: a large plurality of images (para 0182: “For the model training workflow 1405, a training dataset containing hundreds, thousands, tens of thousands, hundreds of thousands or more images should be used to form a training dataset.”) and a plurality of labels associated with the large plurality of images, each label indicating whether something is correctly or incorrectly positioned (para 0182: “Each image may include various labels of one or more types of useful information. 
Each image may include, for example, data indicating whether the dental appliance was correctly placed in a holder in the image, a dental appliance type and/or object type for a dental appliance in the image, an indication as to whether an object was correctly placed against a feature of the dental appliance in the image, an indication as to whether or not the dental appliance in the image was damaged, pixel-level or patch-level segmentation of the image into various classes (e.g., bond region, not bond region, successful bond, unsuccessful bond, etc.), and so forth.”; para 0183: “The labels that are used may depend on what a particular machine learning model will be trained to do.”); wherein the machine learning algorithm is trained to differentiate between the following scenarios (para 0195: “The image may be input into the dental appliance placement assessor 1450, which may use a trained machine learning model…to determine whether the dental appliance was correctly placed in the holder.”): a) correctly positioned; b) incorrectly positioned; (para 0195: “This may include determining…whether an orientation of the dental appliance in the holder is correct…The dental appliance placement assessment 1452 may be a simple indication that the dental appliance was correctly placed in the holder or incorrectly placed in the holder.”) and c) a structural element needs to be changed (para 0195: “This may include determining…whether the dental appliance was placed into a correct type of holder… This information may be used to reposition the dental appliance…in a new holder” – Examiner notes that a type of holder is akin to a structural element); wherein if changing of a structural element is required, the indication comprises a request to change the structural element (para 0195: “This information may be used to reposition the dental appliance…in a new holder” – Examiner notes the “new holder” is akin to the change in structural element). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use machine learning, as taught in Nishimuta et al., in the method of Yang et al. and Pedersen because it would allow a comparison to a much larger data set of trained images, which creates a more accurate determination as compared to a comparison with a single image, which might not always be perfect, especially if a professional is not involved. Further, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to train the machine learning algorithm about structural issues, as taught in Nishimuta et al., in the system of Yang et al. and Pedersen because it provides further ways to assist a user in having a properly positioned and effective hearing aid.
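The three-scenario classification described above can be sketched, purely for illustration and under stated assumptions (the references specify no particular classifier; the labels and feature vectors below are hypothetical placeholders), as a minimal nearest-centroid classifier trained on labelled feature vectors:

```python
# Illustrative sketch only: a minimal nearest-centroid classifier over
# labelled feature vectors, distinguishing three hypothetical labels:
# "correct", "incorrect", and "change_structural_element".
import math

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign vec to the label whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))
```

A training set of correctly positioned, incorrectly positioned, and structural-change examples yields one centroid per scenario, and a new image's features are assigned to the nearest one; an actual system would of course use a far richer model and data set, as Nishimuta et al.'s "hundreds, thousands, tens of thousands" of labelled images suggests.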
Referring to claim 20, Yang et al. teaches a computer implemented method (para 0017: “The application is to be executed by an electronic device having a camera. The application or app or application program or software application may be a computer program or computer software designed to be executed by an electronic device.”) for determining hearing aid position correctness with respect to the unique anatomy of a specific user’s ear (para 0125: “If the comparison shows that the hearing device is correctly arranged, then the flow continues to step 108. If the comparison shows that the hearing device is not correctly arranged, then the flow continues to step 110”), the method comprising: requesting, via a user interface, a hearing aid user (para 0094: “The application (2) is configured for providing instructions (36) and/or feedback (36) for assisting a user, such as the hearing device wearer (8), in using the application (2).”; para 0032: “the application is configured for providing a first visual guide in the viewfinder, the first visual guide is for assisting the wearer in obtaining a first predetermined placement of his/her head and/or ear relative to the camera, when capturing the first image.”) to capture and/or upload an image, the image comprising at least one face image, wherein the image is captured while the user is wearing a hearing aid (para 0030: “The application is configured for capturing the first image with the camera. Thus, the application may be configured for capturing the first image with the camera application. 
The first image shows at least the ear of the wearer with the hearing device arranged in and/or at the ear of the wearer.”), applying a facial landmark algorithm on the image to identify a position of an ear of the user, at least one anatomical landmark of the ear and one or more parts of the hearing aid; deriving from the determined position of the at least one anatomical landmark of the ear of the user and from the one or more parts of the hearing aid, one or more features related to a relative position of the hearing aid relative to the at least one anatomical landmark of the ear of the user in the image (para 0036: “the image recognition functionalities, methods, or software utilise image landmarks or features identified, determined or detected on the first image and/or on the reference image. The image landmarks or features may be landmarks or features such as a point, distance, angle, edge, pixel by pixel, and/or gradient. The image landmarks or features may be a location or an area of the image of interest, su