DETAILED ACTION
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement submitted on 12/28/2023 has been considered by the Examiner and made of record in the application file.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2 and 6-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jay (US 2023/0029766 A1) in view of Wang (US 2019/0357769 A1).
Regarding claim 1, Jay discloses an eye image capturing and processing device, comprising: (paragraph 37: Some implementations of the current subject matter include non-invasive systems and methods of detecting hemoglobin based on digital images that can be obtained from a mobile device.)
a portable user terminal body (paragraph 37: mobile device) having thereon an image capturing module (paragraph 37: camera) which captures an eye image of a user to generate instant (real time) image data; and (paragraphs 44-45, 73-75 and 120-127: The current subject matter can provide systems and methods of affordable and non-invasive measurement of Hb concentration using mobile device digital photography of the palpebral conjunctiva. The interactive application can allow the user to take pictures of the conjunctiva (e.g., inner layer of eyelids). The image analysis algorithm can process the RAW images captured by the camera to improve the quality of the captured image presented to the user. The image analysis algorithm can be processed in real-time to generate a processed image (based on processed RAW image data). The image analysis algorithm can account for white balance, ambient lighting, glare, pigmentation of the surrounding skin, and detect borders of conjunctiva separated by other anatomical features in the image such as the sclera (white), pupil (black), edges of the eyelid, etc.)
an application program, installed in the portable user terminal body, for receiving the instant image data (figure 23: application on iPhone) and performing a data processing procedure, (paragraphs 54-55 and 73: A mobile device can obtain and process digital images of the palpebral conjunctiva to predict Hb concentration. In another example, the received data characterizing the plurality of images can be unprocessed sensor data. The unprocessed sensor data can be processed by applying one or more of linearization, white balancing, demosaicing, color space correction and brightness contrast control) thereby generating a to-be-diagnosed image corresponding to the eye of the user. (paragraph 44: The current subject matter can provide measurement of Hb concentration using mobile device digital photography of the palpebral conjunctiva. For example, by eversion of the bottom eyelid for exposure of the conjunctiva, and capturing the image of the exposed conjunctiva by a user device by image capture minimizing motion, shadow and glare, etc. subsequent rapid, real-time computation using an image analysis algorithm can be processed on a smartphone utilizing an on-device application. The user device can include an interactive application that can allow for detection of Hb concentration based on the captured image and selected region for analysis.)
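For illustration only, and not drawn from Jay or any other cited reference: the RAW-processing steps recited above (linearization, white balancing, demosaicing, color space correction) are conventional, and a gray-world white-balance step of the kind such a pipeline might include can be sketched as follows. The function name and the assumption of a linearized float HxWx3 array in [0, 1] are illustrative choices, not claim language.

```python
import numpy as np

# Illustrative sketch only (not from the record): a conventional
# gray-world white-balance step of the kind a RAW-processing pipeline
# might apply. `linear_rgb` is assumed to be a float HxWx3 array of
# linearized sensor values in [0, 1].
def gray_world_white_balance(linear_rgb):
    # Per-channel means over the whole image.
    means = linear_rgb.reshape(-1, 3).mean(axis=0)
    # Scale each channel so its mean matches the overall mean,
    # neutralizing a global color cast from the ambient illuminant.
    gains = means.mean() / means
    return np.clip(linear_rgb * gains, 0.0, 1.0)
```

Such a step addresses the white-balance and ambient-lighting factors that Jay's image analysis algorithm is described as accounting for.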
Jay further discloses taking an automatic selfie or generating an indication signal to instruct the user to take a manual selfie (paragraph 45: The interactive application can allow the user to take pictures of the conjunctiva. The application can guide the user to capture desirable images. For example, the application can provide instructions to the user to reduce the motion of the camera capturing the image, maximize image focus and reducing shadow/glare on the exposed conjunctiva or guide them to a desirable region on the conjunctiva.) However, Jay fails to disclose taking the automatic selfie or instructing the user to take the manual selfie when a preset condition is satisfied.
In related art, Wang discloses taking an automatic selfie or generating an indication signal to instruct the user to take a manual selfie when a preset condition is satisfied. (figure 36; abstract, paragraph 85: Automatically initiate fundus image capture with the camera when the light source reflection is within the target element on the display. The entire image capture method can also be triggered by passive eye tracking and automatically capture.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wang into the teachings of Jay to effectively improve user experience and reduce user error.
Regarding claim 2, Jay, as modified by Wang, discloses the claimed invention wherein the application program includes a color correction algorithm for transforming an original image obtained through the selfie into the to-be-diagnosed image with true color. (Jay: paragraphs 51-57: RAW image capture and processing to represent conjunctival color)
Regarding claim 6, Jay, as modified by Wang, discloses the claimed invention wherein the application program performs steps of: receiving the instant image data; determining whether the eyelid conjunctiva and the sclera of the user are correctly positioned in the instant image data; determining whether the user pulls down the eyelid properly; and generating the indication signal or taking an automatic selfie when the two determination steps are satisfied. (Jay: paragraphs 44, 48-49 and 58-68: uses image content (conjunctiva vs sclera) to decide valid ROI and gives instructions to the user about pulling the eyelid; Wang: paragraphs 102-104 and 191-196: provides the position detection and auto-capture)
Regarding claim 7, Jay, as modified by Wang, discloses the claimed invention wherein the indication signal controls the portable user terminal body to make a sound or vibrate to instruct the user to take the manual selfie. (Wang: paragraph 175: sound)
Regarding claim 8, Jay, as modified by Wang, discloses the claimed invention wherein the application program comprises a guiding method, comprising steps of: receiving the instant image data; outputting a guiding signal according to positions of the eyelid conjunctiva and the sclera of the user in the instant image data to instruct the user to adjust relative positions between the portable user terminal body and the face of the user and adjust eyelid opening degree; and generating the indication signal or taking the automatic selfie only when the relative positions and the eyelid opening degree are satisfied. (Jay: paragraphs 44-49: guides the user to reduce motion, maximize focus, reduce shadow and glare and properly exposes conjunctiva by pulling down eyelid; Wang: paragraphs 187-196: automatic capture when reflection is within the target)
Regarding claim 9, Jay, as modified by Wang, discloses the claimed invention wherein the guiding signal is a speech signal, a light flash signal, a vibration signal or a screen display signal for guiding the user. (Wang: paragraphs 173-176)
Regarding claim 10, Jay, as modified by Wang, discloses the claimed invention wherein the eye image capturing and processing device is applied to a telehealth system and further comprises a network module transmitting the to-be-diagnosed image to the telehealth system to perform a determination procedure, the determination procedure comprising steps of: the telehealth system using an AI engine to perform disease symptom identification based on the to-be-diagnosed image; and the telehealth system outputting a warning signal to the network module or a default emergency contact person or organization when a disease status is diagnosed in the disease symptom identification. (Jay: paragraphs 44, 69-73, 77-84, 129-135 and 153-159: discloses smartphone-based Hb estimation and remote, home-based uses; also discloses algorithms for Hb prediction; Wang: paragraphs 130-142: cloud-based notifications about image results)
Regarding claim 11, Jay, as modified by Wang, discloses the claimed invention wherein the telehealth system uploads the diagnosed disease status of the user to a cloud storage space wherein the diagnosed disease status comprises abnormal physiological data of the user; the warning signal includes a link allowing access to the cloud storage space for patient information of the user; and disease diagnosis time and contact information are stored in the cloud storage space for later tracking and process optimization. (see the rejection of claim 10)
Regarding claim 12, Jay, as modified by Wang, discloses the claimed invention wherein the disease symptom identification checks symptoms of anemia or jaundice, and the diagnosed disease status comprises the anemia or the jaundice. (Jay: paragraphs 36-43)
Regarding claim 13, Jay, as modified by Wang, discloses the claimed invention wherein a determination module, in communication with the application program, for performing disease symptom identification based on the to-be-diagnosed image, wherein when a disease status is diagnosed in the disease symptom identification, a warning signal is outputted to the portable user terminal body or a default emergency contact person or organization; the diagnosed disease status is uploaded to a cloud storage space; and the warning signal includes a link allowing access to the cloud storage space for patient information of the user. (Jay: paragraphs 69-73, 77-85 and 129-135: discloses a smartphone app computes Hb from conjunctival images using regression models and presents a result locally on the smartphone; Wang: paragraphs 130-133 and 139-141: see the cloud system of Wang and notifications)
Regarding claim 14, Jay, as modified by Wang, discloses the claimed invention wherein the disease symptom identification checks symptoms of anemia or jaundice, and the diagnosed disease status comprises the anemia or the jaundice. (see the rejections of claims 12 and 13)
Regarding claim 15, Jay, as modified by Wang, discloses the claimed invention wherein the image capturing module and a display of the portable user terminal body are arranged on a first surface and a second surface of the portable user terminal body, respectively, and the first surface is opposite to the second surface. (Jay: paragraphs 37, 44 and 73)
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Jay in view of Wang and further in view of Bhat (US 2020/0074959 A1).
Regarding claim 3, Jay, as modified by Wang, discloses the claimed invention except for a spectrum sensor chip, in communication with the portable user terminal body, for collecting spectrum distribution of ambient light, the color correction algorithm being a method for spectrally eliminating ambient interference, the method comprising steps of: receiving the original image obtained through the selfie; receiving the spectrum distribution collected by the spectrum sensor chip; and adjusting the original image based on the spectrum distribution to eliminate color bias of the original image captured in the ambient light, thereby transforming the original image into a true-color to-be-diagnosed image.
However, in related art, Bhat discloses a spectrum sensor chip, in communication with the portable user terminal body, for collecting spectrum distribution of ambient light, the color correction algorithm being a method for spectrally eliminating ambient interference, the method comprising steps of: (paragraphs 20-22) receiving the original image obtained through the selfie; (paragraphs 22 and 48) receiving the spectrum distribution collected by the spectrum sensor chip; (paragraphs 20-22 and 48) and adjusting the original image based on the spectrum distribution to eliminate color bias of the original image captured in the ambient light, thereby transforming the original image into a true-color to-be-diagnosed image. (paragraphs 22 and 52)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Bhat into the teachings of Jay and Wang to effectively improve color accuracy under varying ambient light.
Regarding claim 4, Jay, as modified by Wang and Bhat, discloses the claimed invention wherein further comprising a light source, in communication with the portable user terminal body, for selectively emitting light towards the eye of the user, and the spectrum sensor chip or the image capturing module collects the spectrum distribution at least in a first mode and a second mode, wherein in the first mode, the spectrum distribution is collected to obtain first image information when the light source emits the light towards the eye of the user, in the second mode, the spectrum distribution is collected to obtain second image information when the light source does not emit the light, and the original image is adjusted to eliminate color bias interference so as to transform the original image into the true-color to-be-diagnosed image, wherein the color bias interference is obtained by subtracting the second image information from the first image information. (Bhat: paragraphs 34-35, 43-45 and 48-52)
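The two-mode scheme recited in claim 4 (capture with and without the known light source, then subtract) amounts to conventional flash/no-flash differencing. A minimal sketch of that subtraction step follows; it is illustrative only, not drawn from Bhat, and the function and array names are assumptions (linear-RGB float arrays of identical shape are presumed).

```python
import numpy as np

# Illustrative sketch only (not from the record): isolating the
# contribution of a known light source by subtracting a no-flash
# (ambient-only) exposure from a flash exposure, per the two-mode
# scheme recited in claim 4.
def subtract_ambient(first_mode, second_mode):
    # first_mode: image captured with the device light source on.
    # second_mode: image captured with the light source off (ambient only).
    # Their difference removes the ambient contribution, leaving the
    # scene as lit by the known source.
    diff = first_mode.astype(np.float64) - second_mode.astype(np.float64)
    # Sensor noise can push the difference slightly negative; clamp at 0.
    return np.clip(diff, 0.0, None)
```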
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jay in view of Wang and further in view of Bloom (US 2006/0245643 A1).
Regarding claim 5, Jay, as modified by Wang, discloses the claimed invention except for wherein the color correction algorithm comprises steps of: receiving the original image obtained through the selfie; and adjusting the original image based on standard eye color or iris color to eliminate color bias of the original image captured in ambient light, thereby transforming the original image into a true-color to-be-diagnosed image.
In related art, Bloom discloses a color correction algorithm comprising steps of: receiving the original image obtained through the selfie; and adjusting the original image based on standard eye color or iris color to eliminate color bias of the original image captured in ambient light, thereby transforming the original image into a true-color to-be-diagnosed image. (paragraphs 17-18)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Bloom into the teachings of Jay and Wang to effectively reduce red-eye and color bias in the captured image.
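Reference-based correction of the kind recited in claim 5 (adjusting toward a standard eye or iris color) can be sketched as a per-channel gain that maps an observed reference patch to a known standard color. The sketch below is illustrative only; the `REFERENCE` constant and all names are hypothetical and are not Bloom's values.

```python
import numpy as np

# Hypothetical "standard" sclera/iris reference color in linear RGB;
# this constant is an assumption for illustration, not from Bloom.
REFERENCE = np.array([0.9, 0.9, 0.88])

def correct_with_reference(image, observed_ref):
    # observed_ref: the average color actually measured at the reference
    # region (e.g., the sclera) under the ambient illuminant.
    # Scaling each channel so observed_ref maps to REFERENCE removes the
    # global color bias introduced by that illuminant.
    gains = REFERENCE / observed_ref
    return np.clip(image * gains, 0.0, 1.0)
```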
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOBBAK SAFAIPOUR whose telephone number is (571)270-1092. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BOBBAK SAFAIPOUR/Primary Examiner, Art Unit 2665