DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments to claims 21 and 37 filed 8/28/2025 are acknowledged and entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 21, 37 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 21, 22, and 25-38 are rejected under 35 U.S.C. 103 as being unpatentable over Ozawa (US 20120302847 A1) in view of Ebata (US 20180214005 A1), Park (US 20110301447 A1), and Kono (WO 2017002184 A1).
Regarding Claim 21, Ozawa teaches a medical-use image processing device comprising one or more processors,
wherein the one or more processors perform:
acquiring a medical-use image (Fig. 2, Element 34; [0055] shows the image acquisition unit operated by the processor);
receiving an operation by a user (fig. 1, element 17; [0047] discloses a selection switch for actuation by the user.);
obtaining an illumination mode in a case where the medical-use image is captured, based on the received operation (Fig. 2, Element 80, 81; [0065-68] disclose that images using normal light are processed in the normal light processing section 80, while images obtained with special light are processed by the special light imaging section 81);
performing a detection processing of detecting a lesion using the medical-use image in a case where obtainment is made that the illumination mode is a first illumination mode (Fig. 15-17, Element 100; [0018] discloses that upon reception of a normal light image, the image is enhanced using a blue light image to produce a vascular enhancement image to detect regions of interest; [0096] a pattern recognition procedure may detect suspected lesions),
causing a display device to display information indicating the first illumination mode or information indicating illumination light used in the first illumination mode (Fig. 15-17, Element 100; [0018] discloses that the normal light image is only present in the first special image display mode, indicating the first illumination mode is being used and displayed) in addition to information indicating a detection position of the lesion according to a result of the detection processing in a case where the obtainment is made that the illumination mode is the first illumination mode ([0018] recites vascular enhancement indicating regions of interest to the user);
and causing the display device to display information indicating the second illumination mode or information indicating illumination light used in the second illumination mode (Fig. 15-17, element 101; [0092], when the second narrowband illumination mode is used, oxygen saturation information is displayed instead, indicating the second mode) in addition to information indicating a classification result according to the result of the classification processing in a case where the obtainment is made that the illumination mode is the second illumination mode ([0018] discloses making a judgment upon whether the area of interest is classified as hypoxic), and
a wavelength range of second illumination light emitted in the second illumination mode is narrower than a wavelength range of first illumination light emitted in the first illumination mode ([0018], white light being total light, narrowband being a subsection of total light, hence narrowband illumination step emits a second illumination light with a narrower wavelength range than the first mode).
Ozawa does not explicitly teach performing a classification processing of classifying a type of lesion using the medical-use image in a case where obtainment is made that the illumination mode is a second illumination mode;
wherein in the detection processing, the one or more processors detect the lesion from the medical-use image by using a trained model constructed by machine learning using a plurality of images acquired in the first illumination mode and information regarding a position of the lesion in the plurality of images acquired in the first illumination mode,
in the classification processing, the one or more processors classify the medical-use image or the detected lesion by using a trained model constructed by machine learning using a plurality of images acquired in the second illumination mode and information regarding categories of the plurality of images acquired in the second illumination mode, and
in a case where the one or more processors obtain that the illumination mode is switched between the first illumination mode and the second illumination mode, the one or more processors switch recognition between the detection processing and the classification processing.
However, Ebata teaches performing a classification processing of classifying a type of lesion using the medical-use image in a case where obtainment is made that the illumination mode is a second illumination mode ([0107-110], fig. 5 determination unit 85 performs a process and displays the result on display portion 117);
However, Park teaches wherein in the detection processing, the one or more processors (fig. 1, element Multimodality EHMM, [0163], four superstates representing four modalities to include white light and narrow-band reflectance) detect the lesion from the medical-use image by using a trained model constructed by machine learning using a plurality of images acquired in the first illumination mode and information regarding a position of the lesion in the plurality of images acquired in the first illumination mode ([0057], detection and diagnosis of lesions using image frames in real time during examination),
in the classification processing, the one or more processors (fig. 1, element Multimodality EHMM, [0163], four superstates representing four modalities to include white light and narrow-band reflectance) classify the medical-use image or the detected lesion by using a trained model constructed by machine learning using a plurality of images acquired in the second illumination mode and information regarding categories of the plurality of images acquired in the second illumination mode ([0057], detection and diagnosis of lesions using image frames in real time during examination).
However, Kono teaches in a case where the one or more processors obtain (fig. 12, element 221, p. 16, para. 2, target area setting unit 221 determines either a special or nonspecial light and proceeds with steps focused on either result) that the illumination mode is switched between the first illumination mode and the second illumination mode, the one or more processors switch recognition between the detection processing and the classification processing.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to diagnose a disease state as taught in Ebata in order to aid an operator in determining the state of a lesion (Ebata [0009]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to detect and classify a lesion via a convolutional neural network trained on both normal light images and narrow band images as taught in Park in order to determine a diagnosis based on the relationships between imaging modalities (Park [0126]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to determine illumination mode in an image as taught in Kono in order to make a determination on what procedure is being performed and at which region of interest (Kono p. 6, para. 6).
Regarding Claim 22, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ebata further teaches the one or more processors perform classifying of the type of the lesion detected in the detection processing ([0107-110], fig. 5 determination unit 85 performs a process and displays the result on display portion 117).
Regarding Claim 25, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ozawa further teaches the one or more processors acquire the medical-use image in time series (Fig. 2, Element 60 whose function is described in [0060], [0066]),
and obtain the illumination mode for frames constituting the medical-use image acquired in time series (Fig. 2, Element 82-83 whose function is described in [0064-65]).
Regarding Claim 26, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 25,
wherein Ozawa further teaches in a case where the obtainment is made that the illumination mode is the second illumination mode, the one or more processors cause the display device to continuously display a result of the classification processing, separately from the frames of the medical-use image obtained in time series ([0090] indicates the display of oxygen saturation information, which per [0092] is a narrowband image in a second illumination mode, continuously alongside and separately from the normal light image).
Regarding Claim 27, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ozawa further teaches the one or more processors accept an operation via any one of a foot switch, a microphone, a keyboard, a mouse, an illumination mode setting switch (fig. 1, element 17, [0047]), an operation by line of sight, and an operation by gesture, as the operation by the user.
Regarding Claim 28, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 27,
wherein Ozawa further teaches the one or more processors accept an operation via the illumination mode setting switch (fig. 1, element 17; [0047]) as the operation by the user.
Regarding Claim 29, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ozawa further teaches the one or more processors comprise a first recognizer that is constructed by machine learning and performs the detection processing ([0096-97] discloses the suspected-lesion detection section 84 detects a spot larger than an accepted size via pattern matching),
and a second recognizer that is constructed by machine learning and performs the classification processing ([0096-101] discloses the second detection process of classifying detected spots as hypoxic using pattern matching).
Regarding Claim 30, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 29,
wherein Ozawa further teaches the first recognizer and the second recognizer have a hierarchical network structure ([0096-0098]; Fig. 20).
Regarding Claim 31, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ozawa further teaches the one or more processors cause the display device to display the information indicating the detection position by at least one of: superimposing figures and symbols according to the detection position of the region of interest, displaying position coordinates numerically, and changing a color and gradation of the region of interest (fig. 20 shows superimposing of markers around spots of interest SP and SPx).
Regarding Claim 32, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ebata further teaches the one or more processors cause the display device to display the information indicating the classification result by using at least one of: characters, numbers, figures, symbols, and colors according to the classification result ([0107-110], fig. 5 determination unit 85 performs a process and displays the result on display portion 117).
Regarding Claim 33, Ozawa in view of Ebata and Park and Kono teaches an endoscope system comprising:
the medical-use image processing device according to claim 21;
wherein Ozawa further teaches the display device (fig. 15-20, element 14; [0046]);
an endoscope with an image sensor to acquire the medical-use image (fig. 1, element 12; [0046]);
a light source device which has the first illumination mode and the second illumination mode (fig. 1, element 11; [0046]),
the light source device emitting a first illumination light in the first illumination mode, and emitting a second illumination light in the second illumination mode ([0047-52] discloses 3 observation modes, the first being a normal light image, the latter 2 using narrowband imaging).
Regarding Claim 34, Ozawa in view of Ebata and Park and Kono teaches the endoscope system according to claim 33,
Ozawa does not explicitly recite a system wherein the light source device comprises a plurality of semiconductor light emitting elements.
However, Ebata teaches a system wherein the light source device comprises a plurality of semiconductor light emitting elements (fig. 2, element 20; [0056]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the laser light source of Ozawa to have been a semiconductor source as taught in Ebata in order to control power to the source via electrical voltage (Ebata [0056]).
Regarding Claim 35, Ozawa in view of Ebata and Park and Kono teaches the endoscope system according to claim 34.
Ozawa further teaches a system wherein the light source device emits normal light as the first illumination light,
and emits special light as the second illumination light ([0047-52] discloses 3 observation modes, the first being a normal light image, the latter 2 using narrowband imaging).
Regarding Claim 36, Ozawa in view of Ebata and Park and Kono teaches the endoscope system according to claim 35.
Ebata further teaches a system wherein the light source device emits lights at least from a light emitting element for violet light, a light emitting element for blue light, a light emitting element for green light, and a light emitting element for red light, as the normal light (col. 7, ln. 29-43 discloses that Ebata contains colored LED light sources, which form the normal light when used together),
and a system that emits lights from the semiconductor light emitting element for violet light and the semiconductor light emitting element for green light, or lights from the semiconductor light emitting element for green light and the semiconductor light emitting element for red light, as the special light (fig. 3-4, col. 8, ln. 4-52 shows a pattern of light used in oxygen saturation calculation and color correction wherein green and red lights are applied using red and green light sources).
Regarding Claim 37, Ozawa teaches a method executed by a medical-use image processing device comprising one or more processors,
wherein the one or more processors perform:
acquiring a medical-use image (Fig. 2, Element 34; [0055] shows the image acquisition unit operated by the processor);
receiving an operation by a user (fig. 1, element 17; [0047] discloses a selection switch for actuation by the user);
obtaining an illumination mode in a case where the medical-use image is captured, based on the received operation (Fig. 2, Element 80, 81; [0065-68] disclose that images using normal light are processed in the normal light processing section 80, while images obtained with special light are processed by the special light imaging section 81);
performing a detection processing of detecting a lesion using the medical-use image in a case where obtainment is made that the illumination mode is a first illumination mode (Fig. 15-17, Element 100; [0018] discloses that upon reception of a normal light image, the image is enhanced using a blue light image to produce a vascular enhancement image to detect regions of interest; [0096] a pattern recognition procedure may detect suspected lesions),
and causing a display device to display information indicating the first illumination mode or information indicating illumination light used in the first illumination mode (Fig. 15-17, Element 100; [0018] discloses that the normal light image is only present in the first special image display mode, indicating the first illumination mode is being used and displayed) in addition to information indicating a detection position of the lesion according to a result of the detection processing in a case where the obtainment is made that the illumination mode is the first illumination mode ([0018] recites vascular enhancement indicating regions of interest to the user),
and causing the display device to display information indicating the second illumination mode or information indicating illumination light used in the second illumination mode (Fig. 15-17, element 101; [0092], when the second narrowband illumination mode is used, oxygen saturation information is displayed instead, indicating the second mode) in addition to information indicating a result of classification according to the result of the classification processing in a case where the obtainment is made that the illumination mode is the second illumination mode ([0018] discloses making a judgment upon whether the area of interest is classified as hypoxic), and
a wavelength range of second illumination light emitted in the second illumination mode is narrower than a wavelength range of first illumination light emitted in the first illumination mode ([0018], white light being total light, narrowband being a subsection of total light, hence narrowband illumination step emits a second illumination light with a narrower wavelength range than the first mode).
Ozawa does not explicitly teach performing a classification processing of classifying a type of lesion using the medical-use image in a case where obtainment is made that the illumination mode is a second illumination mode;
wherein in the detection processing, the one or more processors detect the lesion from the medical-use image by using a trained model constructed by machine learning using a plurality of images acquired in the first illumination mode and information regarding a position of the lesion in the plurality of images acquired in the first illumination mode,
in the classification processing, the one or more processors classify the medical-use image or the detected lesion by using a trained model constructed by machine learning using a plurality of images acquired in the second illumination mode and information regarding categories of the plurality of images acquired in the second illumination mode, and
the one or more processors obtain that the illumination mode is switched between the first illumination mode and the second illumination mode and switch recognition between the detection processing and the classification processing.
However, Ebata teaches performing a classification processing of classifying a type of lesion using the medical-use image in a case where obtainment is made that the illumination mode is a second illumination mode ([0107-110], fig. 5 determination unit 85 performs a process and displays the result on display portion 117);
However, Park teaches wherein in the detection processing, the one or more processors (fig. 1, element Multimodality EHMM, [0163], four superstates representing four modalities to include white light and narrow-band reflectance) detect the lesion from the medical-use image by using a trained model constructed by machine learning using a plurality of images acquired in the first illumination mode and information regarding a position of the lesion in the plurality of images acquired in the first illumination mode ([0057], detection and diagnosis of lesions using image frames in real time during examination),
in the classification processing, the one or more processors (fig. 1, element Multimodality EHMM, [0163], four superstates representing four modalities to include white light and narrow-band reflectance) classify the medical-use image or the detected lesion by using a trained model constructed by machine learning using a plurality of images acquired in the second illumination mode and information regarding categories of the plurality of images acquired in the second illumination mode ([0057], detection and diagnosis of lesions using image frames in real time during examination).
However, Kono teaches the one or more processors obtain that the illumination mode is switched between the first illumination mode and the second illumination mode and switch recognition between the detection processing and the classification processing (fig. 12, element 221, p. 16, para. 2, target area setting unit 221 determines either a special or nonspecial light and proceeds with steps focused on either result).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to diagnose a disease state as taught in Ebata in order to aid an operator in determining the state of a lesion (Ebata [0009]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to detect and classify a lesion via a convolutional neural network trained on both normal light images and narrow band images as taught in Park in order to determine a diagnosis based on the relationships between imaging modalities (Park [0126]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determination unit of Ozawa to determine illumination mode in an image as taught in Kono in order to make a determination on what procedure is being performed and at which region of interest (Kono p. 6, para. 6).
Regarding Claim 38, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
wherein Ozawa further teaches a first illumination light (fig. 3, element WHITE LIGHT) is illuminated in the first illumination mode, and a second illumination light (fig. 3, element N1-3) which has a wavelength range narrower than that of the first illumination light is illuminated in the second illumination mode ([0048] narrowband wavelengths have both shorter wavelengths and a narrower range of wavelengths than the white light illumination).
Regarding claim 39, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
Further, Ozawa teaches the device wherein the one or more processors acquire the medical-use image by illuminating a first illumination light in the first illumination mode (fig. 3, element WHITE LIGHT), and the one or more processors acquire the medical-use image by illuminating, in the second illumination mode, a second illumination light (fig. 3, element N1-3) which has a wavelength band narrower than that of the first illumination light ([0048] narrowband wavelengths have both shorter wavelengths and a narrower range of wavelengths than the white light illumination).
Regarding claim 40, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
Further, Ebata teaches the device wherein the type of the lesion is one of hyperplastic polyp, adenoma, intramucosal cancer, and invasive cancer ([0107], adenoma explicitly detected).
Regarding claim 41, Ozawa in view of Ebata and Park and Kono teaches the method according to claim 37,
wherein the type of the lesion is one of hyperplastic polyp, adenoma, intramucosal cancer, and invasive cancer ([0107], adenoma explicitly detected).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Ozawa in view of Ebata, Park, and Kono as applied to claim 21 above, and further in view of Shigeta (US 11399751 B2).
Regarding Claim 24, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
Ozawa in view of Ebata and Park and Kono does not explicitly teach a device wherein the one or more processors cause the display device to display information indicating reliability of the classification.
However, Shigeta teaches a device wherein the one or more processors cause the display device to display information indicating reliability of the classification (col. 17, ln. 63-col. 18, ln. 19 discloses that a notice may be given when reliability of classification is below a lower limit).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the oxygen level detection method of Ozawa to have a failsafe for low reliability calculations as taught in Shigeta in order to prevent biologically impossible results (Shigeta col. 17, ln. 63-col. 18, ln. 19 discloses that in the rare circumstance where an out-of-bounds value is acquired, the oxygen saturation would be either above 100% or below 0%, which is impossible).
Regarding claim 42, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
Ozawa, Ebata, Park, and Kono do not teach the device wherein a number of LED light sources used to emit the first illumination light is greater than a number of LED light sources used to emit the second illumination light.
However, Shigeta teaches the device wherein a number of LED light sources used to emit the first illumination light is greater than a number of LED light sources used to emit the second illumination light (col. 7, ln. 14-28, 44-53, normal observation mode using white light is achieved via simultaneous activation of blue, green, and red light sources 20a/c/d. Narrowband illumination is achieved by activating a single one of these light sources, resulting in fewer LED light sources being used in the second narrowband illumination mode).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the light source of Ozawa to use a combination of LEDs to form white light as taught in Shigeta in order to achieve multiple lighting configurations using the same number of light sources (Shigeta col. 7, ln. 29-col. 8, ln. 51).
Regarding claim 43, Ozawa in view of Ebata and Park and Kono teaches the medical-use image processing device according to claim 21,
Ozawa, Ebata, Park, and Kono do not explicitly teach the device wherein the wavelength range of the second illumination light includes 390 nm to 450 nm or 530 nm to 550 nm.
However, Shigeta teaches the device wherein the wavelength range of the second illumination light includes 390 nm to 450 nm or 530 nm to 550 nm (col. 7, ln. 16-28, G light source 20c emits green light having a wavelength of 540 +/- 20 nm; 530-550 nm is included within that range).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the light source of Ozawa to use a wavelength range as taught in Shigeta in order to better reconstitute white light using multiple light sources (Shigeta col. 7, ln. 29-col. 8, ln. 51).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY TUAN LUU whose telephone number is (703)756-4592. The examiner can normally be reached Monday-Tuesday, Thursday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Carey, can be reached on 571-270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY TUAN LUU/Examiner, Art Unit 3795
/MICHAEL J CAREY/Supervisory Patent Examiner, Art Unit 3795