DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-36 are pending in this application.
Priority
3. Applicant’s claim for domestic priority under 35 U.S.C. 119(e), based on provisional application 63/476,636 filed on 12/21/2022, is acknowledged.
Drawings
4. The drawings filed on 12/13/2023 are acceptable for examination purposes.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-2, 5-9, 13-14, 17-18, 21-22, 25-29, 33 are rejected under 35 U.S.C. 103 as being unpatentable over Kuramoto et al. [hereafter Kuramoto], US Pub 2016/0171718 in view of Guo et al. [hereafter Guo], CN Pub 110910317A.
As to claims 1, 21 [independent], Kuramoto teaches a method of enhancing medical images comprising, at a computing system [fig. 1; 0057]:
receiving a medical image [0006, 0009-0010, 0063-0064: Kuramoto teaches that a medical image or endoscopic image is captured by a medical processing device];
converting the medical image into a brightness component image and two color component images [figs. 19-21; 0006, 0009-0010, 0020-0023, 0107: Kuramoto teaches that the medical image captured by the medical processing device is converted into an intermediate color space such as the Lab or HSL color space, where L is indicative of lightness or brightness] (an illustrative sketch of this color-space decomposition follows the claim mapping below);
Kuramoto does not explicitly teach, but implicitly suggests, the recited limitation of generating a reflectance image from the brightness component image using a machine learning model [paras. 0064-0066: a reflectance image is generated from the brightness or lightness of the object/image that is incident on image sensor 48 through objective lens 46]; and
generating an enhanced medical image based on the reflectance image [fig. 2 & paras. 0072-0074: the video signal generator 66 converts the RGB image signals, which are input from the normal image processor 62 or the special image processor 64 and are represented in an intermediate color space such as the Lab or HSL color space, into a video signal to be displayed on the display monitor 18; based on the video signal, the display monitor 18 displays the enhanced normal image].
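For illustration only, the color-space decomposition referenced in the mapping above can be pictured with the brief sketch below; this is not Kuramoto's implementation, and the use of OpenCV, the choice of the Lab color space, and the synthetic input image are assumptions made solely for illustration.

    # Illustrative sketch only -- not Kuramoto's implementation. Splits an RGB
    # image into a brightness (L) component image and two color component images,
    # using the Lab color space as one example of an intermediate color space.
    import cv2
    import numpy as np

    def split_brightness_and_color(rgb_image: np.ndarray):
        """Return (brightness_image, color_image_1, color_image_2)."""
        lab = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2LAB)  # RGB -> Lab color space
        L, a, b = cv2.split(lab)                          # L = lightness; a, b = color components
        return L, a, b

    # Example usage with a synthetic image:
    rgb = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    brightness, chroma_a, chroma_b = split_brightness_and_color(rgb)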
Guo teaches generating a reflectance image from the brightness component image using a machine learning model [abstract: Guo teaches generating or obtaining a reflectance image from the brightness or lightness of the object/image, and then generating or obtaining the enhanced image based on the reflectance image]; and
generating an enhanced medical image based on the reflectance image [abstract: Guo teaches generating or obtaining a reflectance image from the brightness or lightness of the object/image, and then generating or obtaining the enhanced image based on the reflectance image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Guo's teaching of generating a reflectance image from the brightness component image using a machine learning model into Kuramoto's teaching of generating an enhanced medical image based on the reflectance image, using Guo's color correction, image enhancement, tongue segmentation, and color and texture feature analysis for tongue image diagnosis. The image enhancement is low-level, pre-processing stage image processing, and the deep learning model is one or both of a pre-trained deep learning model and a trained deep learning model. The suggestion/motivation for doing so would have been to benefit the user by providing a technique for tongue body segmentation and precise feature extraction, so as to improve the accuracy of tongue disease diagnosis.
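For illustration only, the reflectance-from-brightness step relied on from Guo can be pictured with a generic Retinex-style decomposition; this sketch is neither Guo's machine learning model nor Kuramoto's processing, and the Gaussian illumination estimate (which a trained network would replace) is an assumption made solely for illustration.

    # Illustrative sketch only -- a generic Retinex-style decomposition, not the
    # machine learning model of Guo. Reflectance is approximated as the log of the
    # brightness image minus the log of a smoothed illumination estimate; a trained
    # network would replace the Gaussian blur used here.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_reflectance(brightness: np.ndarray, sigma: float = 15.0) -> np.ndarray:
        b = brightness.astype(np.float64) + 1.0         # avoid log(0)
        illumination = gaussian_filter(b, sigma=sigma)  # coarse illumination estimate
        reflectance = np.log(b) - np.log(illumination)  # log-domain reflectance
        reflectance -= reflectance.min()                # rescale to [0, 1] for display
        if reflectance.max() > 0:
            reflectance /= reflectance.max()
        return reflectance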
As to claims 2, 22 [dependent from claims 1, 21 respectively], Kuramoto teaches wherein the enhanced medical image comprises at least one region that is brighter than a corresponding region of the medical image [0023, 0064-0066, 0069: Kuramoto teaches that the image sensor 48 of the endoscope system 10 emits light to capture at least first and second endoscopic medical images of different regions of the target image with different brightness ratios, and adjusts a pixel value of the second color image signal based on first brightness information calculated from the first color image signal and second brightness information calculated from the second color image signal].
As to claims 5, 25 [dependent from claims 1, 21 respectively], Kuramoto teaches wherein generating the enhanced medical image comprises applying at least one de-noising algorithm [fig. 2, element 58; 0070: Kuramoto teaches that, after the DSP 56 performs gamma correction and the like on the RGB image signals, the noise remover 58 removes noise from the RGB image signals through a noise removing process (for example, a moving average method or a median filter method)].
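For illustration only, the median-filter de-noising cited in paragraph 0070 might look like the following sketch; the filter size and the use of SciPy are assumptions for illustration and do not reflect Kuramoto's implementation.

    # Illustrative sketch of median-filter de-noising applied per color channel;
    # filter size and library choice are assumptions for illustration only.
    import numpy as np
    from scipy.ndimage import median_filter

    def denoise_rgb(rgb_image: np.ndarray, size: int = 3) -> np.ndarray:
        out = np.empty_like(rgb_image)
        for c in range(rgb_image.shape[2]):             # filter R, G, B independently
            out[..., c] = median_filter(rgb_image[..., c], size=size)
        return out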
As to claims 6, 26 [dependent from claims 1, 21 respectively], Kuramoto teaches wherein the medical image comprises a grayscale image [figs. 19-21; 0006, 0009-0010, 0020-0023, 0107: Kuramoto teaches that the medical image captured by the medical processing device is converted into an intermediate color space such as the Lab or HSL color space, where L is indicative of lightness or brightness; the conversion from the RGB color space to the Lab/HSL color space is straightforward through an RGB-to-grayscale conversion].
As to claims 7, 27 [dependent from claims 6, 26 respectively], Kuramoto teaches wherein the grayscale image comprises three color components [figs. 19-21; 0006, 0009-0010, 0020-0023, 0107: Kuramoto teaches that the medical image captured by the medical processing device is converted into an intermediate color space such as the Lab or HSL color space, where L is indicative of lightness or brightness, while a and b are indicative of Cr (chroma red) and Cb (chroma blue); the conversion from the RGB color space to the Lab/HSL color space is straightforward through an RGB-to-grayscale conversion].
As to claims 8, 28 [dependent from claims 1, 21 respectively], Kuramoto teaches wherein the two color component images are a hue image and a saturation image [figs. 19-21; 0006, 0009-0010, 0020-0023, 0107: Kuramoto teaches that the medical image captured by the medical processing device is converted into an intermediate color space such as the Lab or HSL color space, where L is indicative of lightness or brightness, while a and b are indicative of Cr (chroma red) and Cb (chroma blue); the conversion from the RGB color space to the Lab/HSL color space is straightforward through an RGB-to-grayscale conversion].
As to claims 9, 29 [dependent from claims 1, 21 respectively], Guo teaches wherein the enhanced medical image is generated based on the reflectance image and the two color component images [abstract: Guo teaches generating or obtaining a reflectance image from the brightness or lightness of the object/image, and then generating or obtaining the enhanced image based on the reflectance image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Guo's teaching of generating a reflectance image from the brightness component image using a machine learning model into Kuramoto's teaching of generating an enhanced medical image based on the reflectance image, using Guo's color correction, image enhancement, tongue segmentation, and color and texture feature analysis for tongue image diagnosis. The image enhancement is low-level, pre-processing stage image processing, and the deep learning model is one or both of a pre-trained deep learning model and a trained deep learning model. The suggestion/motivation for doing so would have been to benefit the user by providing a technique for tongue body segmentation and precise feature extraction, so as to improve the accuracy of tongue disease diagnosis.
As to claims 13, 33 [dependent from claims 1, 21 respectively], Kuramoto teaches displaying the enhanced medical image during a medical procedure [fig. 2; 0072-0074: the video signal generator 66 converts the RGB image signals, which are input from the normal image processor 62 or the special image processor 64 and are represented in an intermediate color space such as the Lab or HSL color space, into a video signal to be displayed on the display monitor 18; based on the video signal, the display monitor 18 displays the enhanced normal image].
As to claim 14 [dependent from claim 1], Kuramoto teaches generating a medical procedure report that comprises the enhanced medical image [fig. 2; 0072-0074: the video signal generator 66 converts the RGB image signals, which are input from the normal image processor 62 or the special image processor 64 and are represented in an intermediate color space such as the Lab or HSL color space, into a video signal to be displayed on the display monitor 18; based on the video signal, the display monitor 18 displays the enhanced normal image].
As to claim 17 [dependent from claim 1], Kuramoto teaches wherein the medical image comprises a video frame [fig. 2; 0072-0074: the video signal generator 66 converts the RGB image signals, which are input from the normal image processor 62 or the special image processor 64 and are represented in an intermediate color space such as the Lab or HSL color space, into a video signal to be displayed on the display monitor 18; based on the video signal, the display monitor 18 displays the enhanced normal image].
As to claim 18 [dependent from claim 1], Kuramoto teaches wherein the medical image comprises an endoscopic image [0006, 0009-0010, 0063-0064: Kuramoto teaches that a medical image or endoscopic image is captured by a medical processing device].
7. Claims 3, 19, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kuramoto et al. [hereafter Kuramoto], US Pub 2016/0171718, in view of Guo et al. [hereafter Guo], CN Pub 110910317A, and Ward et al. [hereafter Ward], US Pub 2021/0192727.
As to claims 3, 23 [dependent from claims 1, 21 respectively], Kuramoto and Guo don’t teach wherein the machine learning model was trained on training images that comprise non-medical training images.
Ward teaches wherein the machine learning model was trained on training images that comprise non-medical training images [fig. 1; 0034-0036, 0037: Ward teaches that the machine learning model is trained on training inputs that may also comprise non-medical training data such as laboratory records, past medical history records, physiological variables (e.g., measured oxygenation levels), etc. (e.g., 0037)].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ward's teaching that the machine learning model was trained on training images that comprise non-medical training images into the teachings of Kuramoto and Guo, so that areas of interest are analyzed using a deep learning model on the medical image, which is a diagnostic X-ray image of a respective organ of a patient associated with the diagnostic X-ray image. The deep learning model is one or both of a pre-trained deep learning model and a trained deep learning model. The suggestion/motivation for doing so would have been to benefit the user by reducing the number of false positives and false negatives when diagnosing nuanced pulmonary conditions, since images are automatically normalized to improve accuracy on non-uniform real-world imaging data.
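For illustration only, a training set that comprises non-medical images alongside medical ones, as relied on from Ward, could be assembled as in the following sketch; the stand-in tensors, the toy model, and the PyTorch usage are assumptions and do not reflect Ward's system.

    # Illustrative sketch only -- not Ward's system. Trains a small model on a
    # dataset that combines non-medical and medical training images.
    import torch
    import torch.nn as nn
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    # Stand-in tensors for non-medical and medical training images (N, C, H, W) with labels.
    non_medical = TensorDataset(torch.rand(32, 3, 64, 64), torch.zeros(32, dtype=torch.long))
    medical = TensorDataset(torch.rand(32, 3, 64, 64), torch.ones(32, dtype=torch.long))
    loader = DataLoader(ConcatDataset([non_medical, medical]), batch_size=8, shuffle=True)

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in loader:                       # one pass over the combined set
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()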
As to claim 19 [dependent from claim 1], Kuramoto and Guo don’t teach wherein the medical image comprises an open-field image.
Ward teaches wherein the medical image comprises an open-field image [fig. 1; 0034-0036, 0037: Ward teaches that the imaging source 102 is an X-ray machine, which may be used to generate chest X-ray images corresponding to the claimed open-field image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ward's teaching that the medical image comprises an open-field image into the teachings of Kuramoto and Guo, so that areas of interest are analyzed using a deep learning model on the medical image, which is a diagnostic X-ray image of a respective organ of a patient associated with the diagnostic X-ray image. The deep learning model is one or both of a pre-trained deep learning model and a trained deep learning model. The suggestion/motivation for doing so would have been to benefit the user by reducing the number of false positives and false negatives when diagnosing nuanced pulmonary conditions, since images are automatically normalized to improve accuracy on non-uniform real-world imaging data.
8. Claims 10-12, 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over Kuramoto et al. [hereafter Kuramoto], US Pub 2016/0171718, in view of Guo et al. [hereafter Guo], CN Pub 110910317A, and Westwick et al. [hereafter Westwick], US Pub 2022/0211258.
As to claims 10, 30 [dependent from claims 1, 21 respectively], Kuramoto and Guo don’t teach wherein the medical image is a fluorescence image and the enhanced medical image comprises a combination of a visible light image with the reflectance image.
Westwick teaches wherein the medical image is a fluorescence image and the enhanced medical image comprises a combination of a visible light image with the reflectance image [fig. 5, steps 502-510; 0114-0117 Westwick teaches that the enhanced medical image is a combination of a visible light image with the reflectance image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Westwick's teaching that the medical image is a fluorescence image and the enhanced medical image comprises a combination of a visible light image with the reflectance image into the teachings of Kuramoto and Guo, to generate an enhanced fluorescence image by reducing intensity values that are below a threshold intensity value so as to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region in the tissue, and to display the enhanced fluorescence image. The suggestion/motivation for doing so would have been to introduce a method that reduces intensity values below a threshold intensity value to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region.
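For illustration only, the thresholding and visible-light overlay described in this rationale can be sketched as follows; the threshold value, the green rendering, and the blending weight are assumptions for illustration and do not reflect Westwick's implementation.

    # Illustrative sketch only -- not Westwick's implementation. Suppresses
    # fluorescence intensities below a threshold and blends the result over a
    # visible light image to increase contrast around the target region.
    import numpy as np

    def overlay_fluorescence(visible_rgb: np.ndarray, fluorescence: np.ndarray,
                             threshold: float = 0.2, alpha: float = 0.6) -> np.ndarray:
        fl = fluorescence.astype(np.float64)
        fl = (fl - fl.min()) / (fl.max() - fl.min() + 1e-9)  # normalize to [0, 1]
        fl[fl < threshold] = 0.0                             # suppress low-intensity background
        overlay = np.zeros_like(visible_rgb, dtype=np.float64)
        overlay[..., 1] = fl * 255.0                         # render fluorescence in green
        blended = (1 - alpha) * visible_rgb + alpha * overlay
        return np.clip(blended, 0, 255).astype(np.uint8)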
As to claims 11, 31 [dependent from claims 1, 21 respectively], Kuramoto and Guo don’t teach wherein the medical image comprises a fluorescence image and the enhanced medical image comprises an enhanced fluorescence image.
Westwick teaches wherein the medical image comprises a fluorescence image and the enhanced medical image comprises an enhanced fluorescence image [fig. 1, step 108; 0093].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Westwick's teaching that the medical image comprises a fluorescence image and the enhanced medical image comprises an enhanced fluorescence image into the teachings of Kuramoto and Guo, to generate the enhanced fluorescence image by reducing intensity values that are below a threshold intensity value so as to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region in the tissue, and to display the enhanced fluorescence image. The suggestion/motivation for doing so would have been to introduce a method that reduces intensity values below a threshold intensity value to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region.
As to claims 12, 32 [dependent from claims 11, 31 respectively], Westwick teaches receiving a visible light image and combining the visible light image with the enhanced fluorescence image [fig. 5, steps 502-510; 0114-0117 Westwick teaches that the enhanced medical image is a combination of a visible light image with the reflectance image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Westwick's teaching of receiving a visible light image and combining the visible light image with the enhanced fluorescence image into the teachings of Kuramoto and Guo, to generate the enhanced fluorescence image by reducing intensity values that are below a threshold intensity value so as to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region in the tissue, and to display the enhanced fluorescence image. The suggestion/motivation for doing so would have been to introduce a method that reduces intensity values below a threshold intensity value to increase contrast between the target region in the tissue and areas associated with fluorescence agent that is not located within the target region.
9. Claims 20, 36 are rejected under 35 U.S.C. 103 as being unpatentable over Kuramoto et al. [hereafter Kuramoto], US Pub 2016/0171718, in view of Guo et al. [hereafter Guo], CN Pub 110910317A, and Kohara et al. [hereafter Kohara], US Pub 2012/0127200.
As to claims 20, 36 [dependent from claims 1, 21 respectively], Kuramoto and Guo don’t teach receiving a user input selecting an enhancement mode, and generating the enhanced medical image in response to receiving the user input.
Kohara teaches receiving a user input selecting an enhancement mode, and generating the enhanced medical image in response to receiving the user input [figs. 2, 6; 0062, 0066-0069 Kohara teaches that the enhancement mode is selected to create the enhanced medical image].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kohara's teaching of receiving a user input selecting an enhancement mode and generating the enhanced medical image in response to receiving the user input into the teachings of Kuramoto and Guo, so as to read the medical image, generate a virtual liquid whose light transmittance is not zero, add the virtual liquid to the organ surface in the medical image, and create a projected image of the medical image to which the virtual liquid is added. The suggestion/motivation for doing so would have been to create a projected image of the medical image to which the virtual liquid is added; hence, the medical image having texture can be brought closer to an actual endoscopic image or an image obtained by directly viewing the organ of the test object.
Allowable Subject Matter
10. Claims 4, 15-16, 24, 34-35 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
11. The following is an examiner’s statement of reasons for allowance:
The dependent claim 4 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “wherein the machine learning model was trained on the non-medical training images in a first training stage and trained on medical training images in a second training stage that is subsequent to the first training stage”, in combination with all other limitations as claimed.
The dependent claim 15 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “displaying the enhanced medical image to a user; receiving at least one input from the user for labeling anatomy of interest in the medical image to generate a training medical image; and training a different machine learning model to identify the anatomy of interest based on the training medical image”, in combination with all other limitations as claimed.
The dependent claim 16 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “wherein the medical image is one of a plurality of medical images captured under multiple lighting conditions, and the method comprises: generating a plurality of enhanced medical images from the plurality of medical images using the machine learning model; and training a different machine learning model based on the plurality of enhanced medical images”, in combination with all other limitations as claimed.
The dependent claim 24 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “wherein the machine learning model was trained on the non-medical training images in a first training stage and trained on medical training images in a second training stage that is subsequent to the first training stage”, in combination with all other limitations as claimed.
The dependent claim 34 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “wherein the one or more programs include instructions for: displaying the enhanced medical image to a user; receiving at least one input from the user for labeling anatomy of interest in the medical image to generate a training medical image; and training a different machine learning model to identify the anatomy of interest based on the training medical image”, in combination with all other limitations as claimed.
The dependent claim 35 is allowable over the prior art of record (or cited or listed above) since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations recited “wherein the medical image is one of a plurality of medical images captured under multiple lighting conditions, and the one or more programs include instructions for: generating a plurality of enhanced medical images from the plurality of medical images using the machine learning model; and training a different machine learning model based on the plurality of enhanced medical images”, in combination with all other limitations as claimed.
Conclusion
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARIS SABAH whose telephone number is (571)270-3917. The examiner can normally be reached on Monday/Friday from 9:00AM to 5:30PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached on (571)272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. The Examiner’s personal fax number is (571)270-4917.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/HARIS SABAH/Examiner, Art Unit 2682