Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-51 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more.
In the analysis below, the method of independent claim 1 is considered representative of independent claims 1, 26, and 51, since all of the independent claims recite identical steps despite being directed to different statutory categories. Furthermore, each of independent claims 1, 26, and 51 is directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (see flowchart in MPEP 2106).
Step 2A, prong 1: Yes
The independent claims are directed to the following:
[media_image1.png: claim limitations reproduced as an image]
When viewed under the broadest reasonable interpretation, the instant claims are directed to a judicial exception – an abstract idea belonging to the grouping of mental processes. In particular, steps 3, 4, 5, and 6 can be performed mentally. For example, a person can look at the camera of an electronic device, determine if the head of a person wearing a hearing aid is aligned with the camera, determine if the head has begun to rotate, and capture a sequence of images as the wearer rotates.
Reference may be made to the July 2024 PEG and the various limitations drawn to the mental processes grouping(s), including those of Example 47, claim 2. The claims/limitations in question are recited at a high level of generality and lack any specifics precluding the 'accessing', 'determining', 'initiating', etc., from being interpreted under the mental processes grouping as practically performable in the mind. As identified in the most recent PEG, even a form of automation that broadly/generically involves the use of a machine learning model would fail to preclude the limitations in question from being drawn to the mental processes grouping (see guidance with respect to the 'apply it' consideration of MPEP 2106.05(f)). Hence, limitations 3, 4, 5, and 6 are interpreted as mental steps. The dependent claims, similarly analyzed, further limit the recited steps, but not in such a manner as to preclude an interpretation directed to the identified exception.
Additional elements
The additional elements recited in each of the independent claims are a camera, a display, and a processing unit.
Step 2A, prong 2: No
The above-identified additional elements do not integrate the judicial exception into a practical application.
Steps 1 and 2 amount to merely using a generic computer as a tool to perform the claimed mental process. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (see MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (See MPEP 2106.04(d)). Therefore, the analysis under prong two of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Step 2B: No
The pending claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above in Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer. Each of the additional elements is a generic computer feature performing generic computer functions that are well-understood, routine, and conventional, and does not amount to more than implementing the abstract idea with a computerized system.
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
The dependent claims are similarly analyzed and rejected as mental steps.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18, 20-43, and 45-51 are rejected under 35 U.S.C. 103 as being unpatentable over Gavish et al., US 2023/0362560 (hereinafter “Gavish”), in view of Pedersen et al., US 2024/0122543 (hereinafter “Pedersen”).
Regarding claim 1, Gavish discloses an electronic device (see figure 1, a method/system for verifying correct hearing aid positioning; note that an electronic device is used for the image processing)
comprising: a camera, a display, and a processing unit (see figure 1, and paragraph 0019, a plurality of images are captured using a camera of a mobile phone which has a display and processing unit)
; wherein the electronic device is configured to assist a wearer of a hearing aid to correctly arrange a hearing aid in and/or at an ear of the wearer of the hearing aid (see paragraph 0001)
; accessing the camera of the electronic device (see above paragraph 0019); determining if a head of the wearer is aligned with the camera (see paragraph 0021, guiding the user to position the camera for capturing a face image and determining a correct face position relative to the camera's image frame by applying a face recognition tool)
; instructing the user to rotate relative to the camera (see paragraph 0063, instructing the user to turn the face sideways)
; and initiating a capture sequence capturing a plurality of images, the plurality of images showing at least a part of the ear of the wearer with the hearing aid arranged in and/or at the ear as the head of the wearer rotates (see above paragraph 0063 guiding the capture of a plurality of images).
While Gavish discloses instructing the user to rotate their face, he does not explicitly disclose determining if the head of the wearer has begun to rotate. However, it would have been obvious to do so in order to determine if the user is complying with the instructions.
Pedersen discloses optimizing positioning and configuration of a hearing aid and further discloses that it may be an advantage to place an accelerometer and gyroscope in the hearing aid to determine the rotational movement of the head (see paragraph 0090).
Gavish and Pedersen are analogous art because they are from the same field of endeavor of optimizing placement of a hearing aid.
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Gavish and Pedersen to determine whether the head is rotating as instructed. The motivation would have been to determine whether the user is complying with the instructions, thereby improving the overall result.
Regarding claim 2, as discussed above, Gavish discloses the instructions to the user to rotate their face and captures images; in line with the combination above, Pedersen detects the rotation of the face. Thus, the same combination addressed above would result in the images being captured upon determining that the user's head is rotating.
Regarding claim 3, Gavish discloses a threshold for features for selecting images (see paragraph 0029).
Regarding claim 4, Gavish discloses wherein the electronic device is configured to determine if the ear of the wearer is captured by the camera (see paragraph 0022).
Regarding claim 5, Gavish discloses that the ear is detected and that multiple images can be captured; thus, it would have been obvious to capture the sequence to ensure that the ear is present, since the ear is what is being analyzed.
Regarding claim 6, the electronic device provides a user interface (see figure 1).
Regarding claim 7, the electronic device is configured to automatically initiate the capture sequence upon receiving a user input (see step 220 of figure 2, paragraph 0085).
Regarding claim 8, although not explicitly disclosed, the viewing of images on the cell phone of figure 1 would indicate to the wearer that the images have been captured.
Regarding claim 9, Gavish discloses wherein the electronic device is configured to provide a first visual guide to assist the wearer in aligning his/her face with the camera (see above paragraph 0063).
Regarding claim 10, Gavish discloses wherein the electronic device is configured to provide a second visual guide to assist the wearer in rotating the head of the wearer relative to the camera (see above paragraph 0063).
Regarding claim 11, Gavish discloses that the ear is located in the image (see paragraph 0009); thus, finding the bounding area encompassing the ear and cropping the ear from the image would have been obvious in order to focus on the important part of the image, of which the Examiner takes official notice.
Regarding claims 12-13, Gavish does not explicitly disclose determining the image quality of one of the images, or the image similarities; however, it is well known to filter out poor-quality images to improve processing, of which the Examiner takes official notice. The motivation would have been to improve processing by using only higher-quality images.
Regarding claim 14, Gavish discloses providing feedback if the head of the wearer rotates and/or moves relative to the camera at a rotation speed above or below a threshold and/or outside a predefined position/area (see above paragraph 0063).
Regarding claim 15, an image is displayed after capture on the cell phone as seen in figure 1.
Regarding claim 16, although Gavish does not explicitly disclose determining rotation-angle images in response to a user input, he does disclose guiding the user on how to capture images (see paragraph 0063). It would have been obvious to one of ordinary skill in the art to allow a user to select which images best display the hearing aid while taking the images, thus taking advantage of human intelligence to choose the best images to process.
Regarding claims 17-18, Gavish does not explicitly disclose gesture recognition; however, it is well known to use gesture recognition to allow for user interaction, of which the Examiner takes official notice. The motivation would have been to make user interaction easier.
Regarding claim 20, Gavish discloses the camera is a digital camera (see figure 1).
Regarding claim 21, Gavish discloses the camera is a front facing camera (see figure 1).
Regarding claim 22, Gavish discloses that instructions to reposition the device may include depth information (see paragraph 0081) but does not explicitly disclose a 3D depth sensor. However, 3D depth sensors are well known, and it would have been obvious to utilize one to determine the correct depth at which the hearing aid should be placed. The motivation would have been to allow more precise instructions for repositioning the device.
Regarding claims 23-25, Gavish does not explicitly disclose allowing communication between the wearer of a hearing aid and a hearing care professional during a tele-audiology session; however, it would have been obvious that the invention of Gavish could be used in the realm of well-known tele-medicine sessions to provide expert guidance from a professional if needed.
Claims 26-43 and 45-50 are analyzed similarly to claims 1-18 and 20-25.
Claim 51 is analyzed similarly to claim 1.
Claims 19 and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Gavish in view of Pedersen and further in view of Geng US 2005/0088435.
Regarding claim 19, as discussed above, Gavish and Pedersen disclose the invention of claim 1.
Neither Gavish nor Pedersen explicitly discloses producing a 3D image based on the plurality of images.
Geng discloses a 3D imaging device for making custom fit hearing devices (see figure 1-1).
Geng and Gavish are analogous art because they are from the same field of endeavor of fitting hearing devices.
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Geng with Gavish and Pedersen to produce a 3D image from the captured images. The motivation would have been to obtain a more accurate picture of the patient's ear.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached PTO-892, Notice of References Cited.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN B STREGE whose telephone number is (571)272-7457. The examiner can normally be reached M-F 9-5 (PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached at (571)272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN B STREGE/ Primary Examiner, Art Unit 2669