Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s Response
In Applicant’s Response dated 8/20/25, the Applicant amended Claims 1-6, 8, and 10-17, added Claims 18-22, and argued against the Claims previously rejected in the Office Action dated 5/28/25. Claims 1-22 are pending examination.
In light of the Applicant’s amendments and remarks, the rejections under 35 USC 101 have been withdrawn.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa et al., United States Patent Publication 2020/0337537 (hereinafter “Hirasawa”), in view of Usuda, WO 2020/017213 (hereinafter “Usuda”).
Claim 1:
Hirasawa discloses:
An image processing system comprising a processor comprising hardware, the processor being configured to:
input an input image, generated based on a biological image captured under a first imaging condition, to a trained model (see paragraphs [0071] and [0105]). Hirasawa teaches obtaining a biological image of a digestive organ with white light illumination; and
wherein the second imaging condition is different from the first imaging condition (see paragraph [0164]). Hirasawa teaches a white light illumination imaging condition and a narrow-band light illumination imaging condition.
Hirasawa fails to expressly disclose generating a prediction image corresponding to the input image in which an object captured is to be captured under the second imaging condition.
Usuda discloses:
wherein the trained model is obtained through machine learning of a relationship between a first training image captured under the first imaging condition and a second training image captured under a second imaging condition (see page 2). Usuda teaches the learned model is created by training a learning model to recognize images of different wavelengths;
generate a prediction image by the trained model,
wherein the prediction image generated by the trained model corresponds to an image in which an object captured in the input image is to be captured under the second imaging condition (see page 2). Usuda teaches the generated image is based on the trained model associated with the input image that captured an object using a second wavelength;
image process one of the input image and the prediction image to determine whether a given condition is satisfied (see page 2). Usuda teaches performing image processing on the input and prediction images to determine whether the accuracy/correct data condition is met; and
in response to determining that the given condition is satisfied, output the prediction image (see page 2). Usuda teaches generating an image based on the correct data, and
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa to include generating the prediction image based on the learned model and the input image, for the purpose of improving the accuracy of the learned model in recognizing images, as taught by Usuda.
Claim 2:
Hirasawa discloses:
wherein the first imaging condition corresponds to an imaging condition under which white light is used to capture an image of the object (see paragraphs [0105] and [0148]). Hirasawa teaches the first imaging condition is under white illumination light, and
wherein the second imaging condition corresponds to an imaging condition under which special light that differs in a wavelength band from the white light is used to capture an image of the object, or to an imaging condition under which pigments are to be dispersed to capture an image of the object (see paragraph [0105]). Hirasawa teaches the second imaging condition is a narrow band light or special light.
Claim 3:
Hirasawa discloses:
wherein the processor is configured to output a display image, generated based on a white light image captured under a display imaging condition in which white light is used to capture an image of the object (see paragraphs [0147] and [0148]). Hirasawa teaches outputting images under which white light is used to capture images,
wherein the first imaging condition corresponds to an imaging condition that differs in at least one of light distribution and a wavelength band of illumination light from the display imaging condition (see paragraphs [0105] and [0148]). Hirasawa teaches the first imaging condition is under white illumination light, and
wherein the second imaging condition corresponds to an imaging condition under which special light that differs in a wavelength band from the white light is used to capture an image of the object, or to an imaging condition under which pigments are to be dispersed to capture an image of the object (see paragraph [0105]). Hirasawa teaches the second imaging condition is a narrow band light or special light.
Claim 4:
Hirasawa discloses:
wherein the first training image captured under the first imaging condition, the second training image captured under the second imaging condition, and a third training image captured under a third imaging condition that differs from both the first imaging condition and the second imaging condition (see paragraph [0105]). Hirasawa teaches the training images are based on the endoscopic images including endoscopic images captured with white light illumination on the digestive organs of the subject (image condition 1), endoscopic images captured with dyes (for example, indigo carmine or an iodine solution) applied to the digestive organs of the subject (image condition 3), and endoscopic images captured with narrow-band light (for example, NBI (Narrow Band Imaging) narrow-band light or BLI (Blue Laser Imaging) narrow-band light) illumination on the digestive organs of the subject (image condition 2). These image types are used to create association information for the model, and
Hirasawa fails to expressly disclose generating a prediction image corresponding to the input image in which an object captured is to be captured under the second imaging condition.
Usuda discloses:
wherein the trained model is obtained through machine learning of a relationship between a first training image captured under the first imaging condition and a second training image captured under a second imaging condition (see page 2). Usuda teaches the learned model is created by training a learning model to recognize images of different wavelengths.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa to include generating the prediction image based on the learned model and the input image, for the purpose of improving the accuracy of the learned model in recognizing images, as taught by Usuda.
Claim 5:
Hirasawa discloses:
wherein the first imaging condition corresponds to an imaging condition under which white light is used to capture an image of the object, wherein the second imaging condition corresponds to an imaging condition under which special light that differs in a wavelength band from the white light is used to capture an image of the object, or to an imaging condition under which pigments are to be dispersed to capture an image of the object, and wherein the third imaging condition corresponds to an imaging condition that differs in at least one of light distribution and a wavelength band of illumination light from the first imaging condition (see paragraph [0105]). Hirasawa teaches the training images are based on the endoscopic images including endoscopic images captured with white light illumination on the digestive organs of the subject (image condition 1), endoscopic images captured with dyes (for example, indigo carmine or an iodine solution) applied to the digestive organs of the subject (image condition 3), and endoscopic images captured with narrow-band light (for example, NBI (Narrow Band Imaging) narrow-band light or BLI (Blue Laser Imaging) narrow-band light) illumination on the digestive organs of the subject (image condition 2).
Claim 6:
Hirasawa discloses:
wherein the first training image and the third training image and a second trained model obtained through machine learning of a relationship between the third training image and the second training image, and wherein the processor is configured to generate, based on the input image and the first trained model, an intermediate image corresponding to an image in which the object captured in the input image is to be captured under the third imaging condition, and generate the prediction image based on the intermediate image and the second trained model (see paragraph [0105]). Hirasawa teaches the learning of multiple relationships between images of multiple imaging conditions. The model can handle learning relationships between images under all three conditions. Therefore, Hirasawa can output an image from the first and third conditions.
Hirasawa fails to expressly disclose generating a prediction image corresponding to the input image in which an object captured is to be captured under the second imaging condition.
Usuda discloses:
wherein the trained model is obtained through machine learning of a relationship between a first training image captured under the first imaging condition and a second training image captured under a second imaging condition (see page 2). Usuda teaches the learned model is created by training a learning model to recognize images of different wavelengths.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa to include generating the prediction image based on the learned model and the input image, for the purpose of improving the accuracy of the learned model in recognizing images, as taught by Usuda.
Claim 7:
Hirasawa discloses:
wherein the given condition includes at least one of:
a first condition relating to detection results of a position or a size of a region of interest based on the prediction image (see paragraph [0101]). Hirasawa teaches the region of interest and size based on the result image;
a second condition relating to detection results of a type of the region of interest based on the prediction image (see paragraph [0101]). Hirasawa teaches the type of region such as early stomach cancer;
a third condition relating to certainty of the prediction image (see paragraphs [0093] and [0101]). Hirasawa teaches the certainty of the images;
a fourth condition relating to a diagnosis scene determined based on the prediction image (see paragraph [0101]). Hirasawa teaches the diagnosis such as early stage stomach cancer; and
a fifth condition relating to a part of the object captured in the input image (see paragraph [0101]). Hirasawa teaches a display of the area and the lesion.
Claim 9:
Hirasawa discloses:
wherein the prediction image is an image in which given information included in the input image is enhanced (see paragraph [0134]). Hirasawa teaches the prediction image is an image in which the input image is enhanced with a region of interest.
Claim 10:
Hirasawa discloses:
wherein the processor is configured to control a display to display at least one of a white light image captured using white light and the prediction image, or the white light image and the prediction image side by side (see paragraph [0100]). Hirasawa teaches displaying the input white light image and the result image.
Claim 11:
Hirasawa discloses:
wherein the processor is configured to image process the prediction image to detect a region of interest, and when the region of interest is detected, control a display to display information based on the prediction image. (see paragraphs [0099] and [0100]). Hirasawa teaches displaying the result image and the probability of the accuracy for the prediction image and displaying the information to the user.
Claim 12:
Hirasawa discloses:
An endoscope system comprising:
an illumination device configured to emit illumination light to irradiate an object (see paragraph [0148]). Hirasawa teaches an illumination device irradiating an organ;
an imaging device configured to capture a biological image in which the object is captured (see paragraphs [0071] and [0105]). Hirasawa teaches obtaining a biological image of a digestive organ; and
Hirasawa teaches the remaining limitations; they are interpreted and rejected for the same reasons as the system of Claim 1.
Claims 13 and 14:
These claims are interpreted and rejected for the same reasons as the system of Claims 2 and 3, respectively.
Claim 15:
Hirasawa discloses:
wherein the illumination device is configured to emit the first illumination light to irradiate the object in a first imaging frame of the imaging device, and emit the second illumination light to irradiate the object in a second imaging frame of the imaging device that differs from the first imaging frame (see figures 15A-15F and paragraph [0148]). Hirasawa teaches irradiating the object with the white light illumination in a first frame and irradiating the object with a second, narrow-band illumination in a second imaging frame,
wherein the processor is configured to: control a display to display a display image based on a biological image captured in the first imaging frame (see paragraph [0148]). Hirasawa teaches displaying the white light illumination biological image in the first frame; and
input the input image, generated based on the biological image captured under the first imaging condition in the second imaging frame, to the trained model and generate the prediction image by the trained model (see paragraphs [0099] and [0100]). Hirasawa teaches displaying the result image based on the input image with the probability of the accuracy for the prediction image and displaying the information to the user.
Claim 16:
Hirasawa discloses:
wherein the illumination device includes a first illumination section configured to emit the first illumination light and a second illumination section configured to emit the second illumination light (see paragraphs [0114] and [0115]). Hirasawa teaches the device is able to emit a second illumination light,
wherein the second illumination section is configured to emit a plurality of illumination light that differs from each other in at least one of the light distribution and the wavelength band (see paragraphs [0114] and [0115]). Hirasawa teaches the device is able to emit white light and narrow-band light (for example, NBI narrow-band light or BLI narrow-band light) illumination on the digestive organs of the subject, and
wherein the processor is configured to generate and output, based on the plurality of illumination light, a plurality of different kinds of the prediction image (see paragraphs [0099] and [0100]). Hirasawa teaches displaying the result image and the probability of the accuracy for the prediction image and displaying the information to the user.
Claim 17:
Although Claim 17 is a method claim, it is interpreted and rejected for the same reasons as the system of Claim 1.
Claim 18:
This claim is rejected for the same reasons as Claim 1. Usuda teaches a plurality of models and multiple image conditions.
Claim 20:
Hirasawa discloses:
wherein, in image processing the one of the input image and the one or more of the plurality of prediction images to determine whether the given condition is satisfied, the processor is configured to:
image process the plurality of the prediction images to detect a region of interest in each of the plurality of prediction images and to determine a level of certainty of detection of the region of interest in the each of the plurality of prediction images; and identify, as the given condition, one or more of the plurality of prediction images having a level of certainty of detection of the region of interest higher than a predetermined level of certainty (see paragraphs [0080]-[0083]). Hirasawa teaches calculating the probability score indicating certainty of the lesion name and location and identifying those images with high certainty; and
wherein, in outputting the at least one prediction image of the plurality of prediction images, the processor is configured to output the one or more of the plurality of prediction images having the level of certainty of detection of the region of interest higher than the predetermined level of certainty (see paragraphs [0080]-[0083]). Hirasawa teaches outputting image based on the certainty of the lesion name and location.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa and Usuda, in view of Gadermayr et al., "Narrow band imaging versus white-light: What is best for computer-assisted diagnosis of celiac disease?," 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic (hereinafter “Gadermayr”).
Claim 8:
Hirasawa discloses:
wherein the first imaging condition includes a plurality of imaging conditions under which different illumination light with different light distribution or a wavelength band is used for imaging (see paragraph [0108]). Hirasawa teaches the first imaging condition being different illumination light or different wavelength bands,
Hirasawa and Usuda fail to expressly disclose a plurality of trained models based on different illumination light and controlling to change the illumination light based on the given condition.
Gadermayr discloses:
wherein the processor is configured to: input the input image to a plurality of trained models; generate a plurality of different kinds of the prediction image by the plurality of the trained models and the input image captured using the different illumination light (see page 3, Section “Opposing and Combined Modalities”). Gadermayr teaches outputting prediction images based on the model trained for that illumination modality, and
control to change the illumination light based on the given condition (see page 4, Section “Conclusion”). Gadermayr teaches a system that chooses the modality used for obtaining the images based on underlying features.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa to include a plurality of trained models for the purpose of determining which modality has the best accuracy, as taught by Gadermayr.
Claims 19 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa and Usuda, in view of Kumar et al., United States Patent Publication 2012/0316421 (hereinafter “Kumar”).
Claim 19:
Usuda discloses:
wherein, in image processing the one of the input image and the one or more of the plurality of prediction images to determine whether the given condition is satisfied (see page 2). Usuda teaches performing image processing on the input and prediction images to determine whether the accuracy/correct data condition is met; and
Hirasawa and Usuda fail to expressly disclose identifying and outputting prediction images based on a severity of a region of interest.
Kumar discloses:
the processor is configured to: image process the plurality of the prediction images to identify a region of interest in each of the plurality of prediction images and to determine a severity of the region of interest in the each of the plurality of prediction images; and (see paragraph [0058]). Kumar teaches assessing the regions of interest based on the severity of the lesions found in the images, and
identify, as the given condition, one or more of the plurality of prediction images having a severity level higher than a predetermined level (see paragraph [0058]). Kumar teaches identifying the images having the highest rank of severity of the lesions in the images;
and wherein, in outputting the at least one prediction image of the plurality of prediction images, the processor is configured to output the one or more of the plurality of prediction images having the severity level higher than the predetermined level (see paragraph [0013] and [0058]). Kumar teaches outputting the image with the highest severity.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa and Usuda to include the images being ranked based on severity of the region of interest, for the purpose of efficiently finding images in a plurality of images based on a condition such as severity, as taught by Kumar.
Claim 22:
Hirasawa and Usuda fail to expressly disclose determining features about the object once the condition is satisfied.
Kumar discloses:
wherein, in image processing the one of the input image and the one or more of the plurality of prediction images to determine whether the given condition is satisfied, the processor is configured to:
image process the input image to acquire information about a part of an object captured in the input image, and to determine whether the information acquired about the part of the object captured in the input image satisfies the given condition; and in response to determining that the given condition is satisfied, output the at least one prediction image of the plurality of prediction images. (see paragraph [0068]). Kumar teaches determining information about the object based on the input images such as summarizing appearance, shape and size.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa and Usuda to include determining attributes about the object based on the condition being satisfied, for the purpose of efficiently identifying details about the objects within the images, as taught by Kumar.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa and Usuda, in view of Makino, United States Patent Publication 2021/0407077 (hereinafter “Makino”).
Claim 21:
Hirasawa and Usuda fail to expressly disclose a system allowing multiple diagnoses.
Makino discloses:
wherein the given condition comprises a first given condition and a second given condition, and wherein, in image processing the one of the input image and the one or more of the plurality of prediction images to determine whether the given condition is satisfied, and outputting at least one prediction image of the plurality of prediction images, the processor is configured to: image process, in a first diagnosis mode, a first prediction image of the plurality of prediction images to determine whether the first given condition is satisfied with respect to the first prediction image (see paragraph [0223]). Makino teaches having given conditions and determining the diagnosis images based on if the given condition is satisfied or not;
in response to determining that the first given condition is satisfied, output at least the first prediction image; in response to determining that the first given condition is not satisfied, image process, in a second diagnosis mode, a second prediction image of the plurality of prediction images to determine whether the second given condition is satisfied with respect to the second prediction image; and in response to determining that the second given condition is satisfied, output at least the second prediction image (see paragraph [0223]). Makino teaches having given conditions and determining multiple diagnosis images based on if the given condition is satisfied or not; and
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Hirasawa and Usuda to include determining multiple diagnoses based on conditions being satisfied, for the purpose of efficiently determining correct and prompt diagnoses of images, as taught by Makino.
Response to Arguments
Applicant’s arguments, see REM, filed 8/20/25, with respect to the rejections of Claims 1-17 under 35 USC 102 and 103 have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Hirasawa and Usuda.
Rejections Under 35 USC 102(a)(1)
Applicant argues that, based on the foregoing discussion, it must be concluded that each and every element as set forth in claim 1 is not found, either expressly or inherently described, in Hirasawa; therefore, claim 1 is not anticipated by Hirasawa, and claims 12 and 17 are not anticipated by Hirasawa for similar reasons.
The Examiner agrees that Hirasawa alone does not teach the features of the claim.
The Examiner introduced new art, Usuda, in combination with Hirasawa to teach the elements of the claims. See the above rejection of Claims 1, 12 and 17.
The Applicant argues that Claims 2-7, 9-11 and 13-16 depend from and incorporate by reference all the elements of claims 1 and 12, respectively; therefore, claims 2-7, 9-11 and 13-16 are not anticipated by Hirasawa for at least the reasons discussed above with respect to claims 1 and 12, and withdrawal of the rejection of claims 1-7 and 9-17 under 35 U.S.C. §102(a)(2) is respectfully requested.
The Examiner disagrees.
Based on the response to the arguments above regarding the independent claims, the claims are rejected using the combination of references (see above).
Claim Rejections under 35 USC 103
Applicant argues that Gadermayr does not cure the above-identified deficiencies of Hirasawa, and that based on the foregoing discussion, it must be concluded that the teachings of the cited references, taken individually or in combination, fail to suggest all the elements of claim 1; therefore, claim 1 is not obvious over the cited references.
Claim 8 depends from and incorporates by reference all the elements of claim 1. Therefore, claim 8 is not obvious over the cited references for at least the reasons discussed above with respect to claim 1.
The Examiner agrees Gadermayr does not cure the deficiencies.
The Examiner introduced new art, Usuda, in combination with Hirasawa to teach the elements of Claim 1. See the above rejection of Claims 1, 12 and 17. Therefore, Claim 8 remains rejected under the new combination of art.
Rejections Under 35 USC 101
These rejections have been withdrawn; therefore, the arguments are moot.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE whose telephone number is (571)270-7259. The examiner can normally be reached M-F 8a-4p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIONNA M BURKE/Examiner, Art Unit 2178 10/30/25
/STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178