Prosecution Insights
Last updated: April 19, 2026
Application No. 18/007,367

Image Processing Device, Image Processing Method, Image Processing Program, Endoscope Device, and Endoscope Image Processing System

Final Rejection §103
Filed: Jan 30, 2023
Examiner: MALDONADO, STEVEN
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Riken
OA Round: 4 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 5-6
To Grant: 3y 0m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 30% (6 granted / 20 resolved; -40.0% vs TC avg) — grants only 30% of cases
Interview Lift: +54.2% among resolved cases with interview — a strong lift
Avg Prosecution: 3y 0m typical timeline; 51 applications currently pending
Total Applications: 71 across all art units (career history)

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      8.4%     -31.6%
§103      49.1%    +9.1%
§102      15.9%    -24.1%
§112      25.8%    -14.2%

Based on career data from 20 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1, 7, 11, 15, 20, and 27-29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7, 11, 15, 20, and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Shelton (US20210196108A1) in view of Tan (CN107169535A) and further in view of Saito et al. (US20220020496A1; hereinafter Saito).

Regarding Claim 1, Shelton discloses an image processing device ("The control system includes an imaging system and a control circuit coupled to the imaging system. The imaging system includes a multispectral electromagnetic radiation (EMR) source and an image sensor." [0003]), comprising: a memory, and a processor coupled to the memory, wherein the processor is configured to ("The control system 133 includes a control circuit 132 in signal communication with a memory 134. The memory 134 stores instructions executable by the control circuit 132 to determine and/or recognize critical structures (e.g. the critical structure 101 in FIG. 1)" [0133]) acquire a first image captured by a camera and obtained by irradiating a gastrointestinal tract of a living body with first light representing light of a wavelength of 1050 nm to 1105 nm ("A surgical visualization system, as disclosed herein, can be employed in a number of different types of procedures for different medical specialties, such as ... bariatric/gastric," [0331]; "An illustration of the utilization of spectral imaging techniques to visualize different tissue types and/or anatomical structures is shown in FIG. 13B. ... can visualize a tumor 2332, an artery 2334, and various abnormalities" [0176]; "Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths." [0178]); acquire a second image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with second light representing light of a wavelength of 1145 nm to 1200 nm ([0178]; FIG. 13D shows the wavelengths that are imaged); acquire a third image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with third light representing light of a wavelength of 1245 nm to 1260 nm ([0178]; FIG. 13D); and acquire a fourth image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with fourth light representing light of a wavelength of 1350 nm to 1405 nm ([0178]; FIG. 13D); and wherein the processor is configured to determine whether or not the gastrointestinal stromal tumor is present at each pixel ("Each of the aforementioned image portions 3072, 3074, 3076, 3078 can be fused together by the control system 133 to generate the fused image 3070 that provides for an unobstructed visualization of the tumor 3038 and any other relevant structures 3040." [0272]).

Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the gastrointestinal tract area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.

However, in a similar field of endeavor, Tan teaches a deep learning classification technique and device for biological multispectral images [Abstract]. Tan also teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image ("The data preprocessing comprises the following processing procedures of converting a multispectral image into an M × N matrix, wherein M is the number of multispectral image pixels, M is the image width × image height, and N is the number of wavelength values in the multispectral image, calculating the first M wavelengths which have larger contribution to the M × N matrix by using Principal Component Analysis (PCA), wherein the calculated M wavelengths can be selected A combination of wavelengths; in that The method comprises the steps of selecting the most relevant wavelength combination A from the wavelength combinations, wherein A is the wavelength combination containing n different wavelength values" [Pg. 7]), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the gastrointestinal tract area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model ("the contribution of each spectral wavelength value to each pixel can be calculated by a multivariate curve resolution Method (MCR) and non-negative least squares iterative optimization (NNLS) in Principal Component Analysis (PCA), so that the contribution of autofluorescence is directly eliminated from the pixels." [Pg. 10]; "Through steps S101 to S103, data of the biological multispectral image is preprocessed, a maximum correlation region is selected as an input of a deep learning network, a correlation feature filter template is obtained based on a biological pathological mechanism design, and is used as an initial training weight, learning of revealing spectral correlation features is enhanced by adjusting weights and relative layer positions of each wavelength in the template, and finally a trained CNN neural network can be obtained" [Pg. 9-10]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of those images into an integrated pixel of the integrated image, with the wavelength of the light selected based on a contribution amount as recited, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6].

Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area. However, in the similar field of diagnostic endoscopic imaging, Saito teaches a diagnostic assistance method for a disease based on an endoscopic image of a digestive organ with use of a convolutional neural network (CNN), which trains the CNN using a first endoscopic image of the digestive organ and at least one final diagnosis result of the positivity or the negativity for the disease in the digestive organ [Abstract]. Saito also teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area ("a CNN system capable of classifying gastrointestinal images" [0214]; "The CNN system was trained using endoscopic images captured daily in the clinic to which one of the inventors of the present invention belongs. The endoscope systems used included high-resolution or high-definition upper gastrointestinal endoscopes" [0290]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004].
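The combination the rejection builds from Tan involves two concrete operations: stacking the co-located pixels of each spectral capture into a single integrated pixel, and ranking wavelengths by their contribution under principal component analysis. The sketch below is a minimal illustration of those two steps only; the function names, image shapes, and band midpoints are assumptions made for the example, not anything taken from Shelton, Tan, or the application as filed.

```python
import numpy as np

def integrate_images(images):
    """Stack N single-band captures (each H x W) so each integrated pixel
    holds the co-located values from every band: result is (H, W, N)."""
    return np.stack(images, axis=-1)

def wavelength_contributions(integrated):
    """Flatten to the M x N matrix Tan's preprocessing describes
    (M = H*W pixels, N = bands) and score each band by the magnitude
    of its loading on the first principal component."""
    h, w, n = integrated.shape
    X = integrated.reshape(h * w, n).astype(float)
    X -= X.mean(axis=0)                       # center each band over pixels
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return np.abs(vt[0])                      # |PC1 loading| per band

# Toy usage with simulated 64x64 captures at midpoints of the claimed bands.
bands_nm = [1077, 1172, 1252, 1377]           # hypothetical band midpoints
images = [np.random.rand(64, 64) for _ in bands_nm]
integrated = integrate_images(images)         # (64, 64, 4) integrated image
scores = wavelength_contributions(integrated)
print(sorted(zip(bands_nm, scores), key=lambda t: -t[1]))
```

Under this reading, the "integrated image" is simply a height × width × bands array, and the contribution scores give one plausible mechanics for the claimed contribution-based wavelength selection.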
Regarding Claim 7, Shelton discloses an image processing device ([0003], quoted above), comprising: a memory, and a processor coupled to the memory, wherein the processor is configured to ([0133], quoted above) acquire a first image captured by a camera and obtained by irradiating a lung area of a living body with first light representing light of a wavelength of 955 nm to 1105 nm ([0331], [0176], and [0178], quoted above); acquire a second image captured by the camera and obtained by irradiating the lung area of the living body with second light representing light of a wavelength of 1055 nm to 1135 nm ([0178]; FIG. 13D shows the wavelengths that are imaged); acquire a third image captured by the camera and obtained by irradiating the lung area of the living body with third light representing light of a wavelength of 1135 nm to 1295 nm ([0178]; FIG. 13D); acquire a fourth image captured by the camera and obtained by irradiating the lung area of the living body with fourth light representing light of a wavelength of 1295 nm to 1510 nm ([0178]; FIG. 13D); acquire a fifth image captured by the camera and obtained by irradiating the lung area of the living body with fifth light representing light of a wavelength of 1510 nm to 1645 nm ([0178]; FIG. 13D); and acquire a sixth image captured by the camera and obtained by irradiating the lung area of the living body with sixth light representing light of a wavelength of 1820 nm to 2020 nm ([0178]; FIG. 13D); and wherein the processor is configured to determine whether or not the lung tumor is present at each pixel ([0272], quoted above).

Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image by arranging together corresponding pixels from each of the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a lung tumor present in the lung area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the lung area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.

However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image by arranging together corresponding pixels from each of those images into an integrated pixel of the integrated image ([Pg. 7], quoted above), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the lung area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model ([Pg. 10] and [Pg. 9-10], quoted above).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate the integrated image from the six images and to select the wavelengths based on a contribution amount as recited, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6].

Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a lung tumor present in the lung area. However, Saito teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a tumor ([0214] and [0290], quoted above). Saito does not specifically teach detecting, from the integrated image, a lung tumor. However, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton, Tan, and Saito to detect, from the integrated image, a lung tumor, because it would yield the predictable result of identifying the tumor.
Regarding Claim 11, Shelton discloses an image processing device ([0003], quoted above), comprising: a memory, and a processor coupled to the memory, wherein the processor is configured to ([0133], quoted above) acquire a first image captured by a camera and obtained by irradiating a stomach area of a living body with first light representing light of a wavelength of 1065 nm to 1135 nm ([0331], [0176], and [0178], quoted above); acquire a second image captured by the camera and obtained by irradiating the stomach area of the living body with second light representing light of a wavelength of 1180 nm to 1230 nm ([0178]; FIG. 13D shows the wavelengths that are imaged); acquire a third image captured by the camera and obtained by irradiating the stomach area of the living body with third light representing light of a wavelength of 1255 nm to 1325 nm ([0178]; FIG. 13D); and acquire a fourth image captured by the camera and obtained by irradiating the stomach area of the living body with fourth light representing light of a wavelength of 1350 nm to 1425 nm ([0178]; FIG. 13D); and wherein the processor is configured to determine whether or not the stomach tumor is present at each pixel ([0272], quoted above).

Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the stomach area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.

However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of those images into an integrated pixel of the integrated image ([Pg. 7], quoted above), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the stomach area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model ([Pg. 10] and [Pg. 9-10], quoted above).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate the integrated image and to select the wavelengths based on a contribution amount as recited, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6].

Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area. However, Saito teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area ([0214] and [0290], quoted above). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004].
Regarding Claim 15, Shelton discloses an image processing device ([0003], quoted above), comprising: a memory, and a processor coupled to the memory, wherein the processor is configured to ([0133], quoted above) acquire a first image captured by a camera and obtained by irradiating a large bowel area of a living body with first light representing light of a wavelength of 1020 nm to 1140 nm ([0331], [0176], and [0178], quoted above); acquire a second image captured by the camera and obtained by irradiating the large bowel area of the living body with second light representing light of a wavelength of 1140 nm to 1260 nm ([0178]; FIG. 13D shows the wavelengths that are imaged); acquire a third image captured by the camera and obtained by irradiating the large bowel area of the living body with third light representing light of a wavelength of 1315 nm to 1430 nm ([0178]; FIG. 13D); and acquire a fourth image captured by the camera and obtained by irradiating the large bowel area of the living body with fourth light representing light of a wavelength of 1430 nm to 1535 nm ([0178]; FIG. 13D); and wherein the processor is configured to determine whether or not the large bowel tumor is present at each pixel ([0272], quoted above).

Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the large bowel area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.

However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of those images into an integrated pixel of the integrated image ([Pg. 7], quoted above), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the large bowel area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model ([Pg. 10] and [Pg. 9-10], quoted above).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate the integrated image and to select the wavelengths based on a contribution amount as recited, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6].

Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area. However, Saito teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area ([0214] and [0290], quoted above). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004].
Regarding Claim 20, Shelton discloses an image processing method according to which a computer executes processing ([0133], quoted above), the processing comprising: acquire a first image captured by a camera and obtained by irradiating a gastrointestinal tract of a living body with first light representing light of a wavelength of 1050 nm to 1105 nm ([0331], [0176], and [0178], quoted above); acquire a second image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with second light representing light of a wavelength of 1145 nm to 1200 nm ([0178]; FIG. 13D shows the wavelengths that are imaged); acquire a third image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with third light representing light of a wavelength of 1245 nm to 1260 nm ([0178]; FIG. 13D); and acquire a fourth image captured by the camera and obtained by irradiating the gastrointestinal tract area of the living body with fourth light representing light of a wavelength of 1350 nm to 1405 nm ([0178]; FIG. 13D); and wherein the processor is configured to determine whether or not the gastrointestinal stromal tumor is present at each pixel ([0272], quoted above).

Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the gastrointestinal tract area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.

However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of those images into an integrated pixel of the integrated image ([Pg. 7], quoted above), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the gastrointestinal tract area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model ([Pg. 10] and [Pg. 9-10], quoted above).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate the integrated image and to select the wavelengths based on a contribution amount as recited, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6].

Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area. However, in the similar field of diagnostic endoscopic imaging, Saito teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area ([0214] and [0290], quoted above). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a gastrointestinal stromal tumor present in the gastrointestinal tract area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004].
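The Saito limitation, as mapped, amounts to feeding the integrated image to a learned model that outputs a per-pixel tumor determination. The sketch below is a hypothetical stand-in: Saito's actual CNN architecture is not reproduced in the record, so the layer choices here are illustrative assumptions showing only how a 4-channel integrated image could yield a per-pixel probability map.

```python
import torch
import torch.nn as nn

class PerPixelDetector(nn.Module):
    """Toy fully convolutional network: 4-band integrated image in,
    per-pixel tumor-presence probability out."""
    def __init__(self, in_bands: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, H, W); returns (batch, 1, H, W) in [0, 1]
        return torch.sigmoid(self.net(x))

model = PerPixelDetector(in_bands=4)
integrated = torch.rand(1, 4, 64, 64)         # stacked spectral captures
prob_map = model(integrated)                  # per-pixel presence probability
print(prob_map.shape)                         # torch.Size([1, 1, 64, 64])
```

A per-pixel sigmoid head tracks the claims' "whether or not the tumor is present at each pixel" language more directly than a whole-image classifier would.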
Regarding Claim 27, Shelton discloses an image processing method according to which a computer executes processing (“The control system 133 includes a control circuit 132 in signal communication with a memory 134. The memory 134 stores instructions executable by the control circuit 132 to determine and/or recognize critical structures (e.g. the critical structure 101 in FIG. 1)” [0133]), the processing comprising: acquire a first image captured by a camera and obtained by irradiating a lung area of a living body with first light representing light of a wavelength of 955 nm to 1105 nm (“A surgical visualization system, as disclosed herein, can be employed in a number of different types of procedures for different medical specialties, such as .. bariatric/gastric,“ [0331], “An illustration of the utilization of spectral imaging techniques to visualize different tissue types and/or anatomical structures is shown in FIG. 13B. … can visualize a tumor 2332, an artery 2334, and various abnormalities” [0176], ”Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178]); acquire a second image captured by the camera and obtained by irradiating the lung area of the living body with second light representing light of a wavelength of 1055 nm to 1135 nm (”Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13 D shows the wavelengths that are imaged) PNG media_image1.png 252 359 media_image1.png Greyscale acquire a third image captured by the camera and obtained by irradiating the lung area of the living body with third light representing light of a wavelength of 1135 nm to 1295 nm (”Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13 D shows the wavelengths that are imaged) acquire a fourth image captured by the camera and obtained by irradiating the lung area of the living body with fourth light representing light of a wavelength of 1295 nm to 1510 nm (”Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 
acquire a fifth image captured by the camera and obtained by irradiating the lung area of the living body with fifth light representing light of a wavelength of 1510 nm to 1645 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged); and acquire a sixth image captured by the camera and obtained by irradiating the lung area of the living body with sixth light representing light of a wavelength of 1820 nm to 2020 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged). Shelton also discloses that the processor is configured to determine whether or not a lung tumor is present at each pixel (“Each of the aforementioned image portions 3072, 3074, 3076, 3078 can be fused together by the control system 133 to generate the fused image 3070 that provides for an unobstructed visualization of the tumor 3038 and any other relevant structures 3040.” [0272]). Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image by arranging together corresponding pixels from each of the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, whether a lung tumor is present at each pixel in the integrated image, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the lung area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model.
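The per-pixel determination described above reduces, mechanically, to scoring each integrated pixel's band vector and thresholding the score. A minimal sketch, assuming a simple logistic scorer as a stand-in for the claimed "learned model or statistical model" (everything below, including the parameters, is an editor's assumption for illustration, not a method from the references):

```python
import numpy as np

def per_pixel_tumor_map(integrated, weights, bias, threshold=0.5):
    """Score every integrated pixel's band vector with a logistic
    model and threshold into a boolean tumor / no-tumor map."""
    h, w, n = integrated.shape
    pixels = integrated.reshape(-1, n)        # M x N, with M = h * w
    logits = pixels @ weights + bias          # one score per pixel
    probs = 1.0 / (1.0 + np.exp(-logits))     # squash to [0, 1]
    return (probs >= threshold).reshape(h, w)

# Hypothetical six-band integrated image and made-up parameters.
integrated = np.random.rand(480, 640, 6)
weights, bias = np.random.randn(6), 0.0
tumor_map = per_pixel_tumor_map(integrated, weights, bias)
print(tumor_map.shape, tumor_map.sum())      # (480, 640), pixel count
```

In practice a CNN would replace the logistic scorer, but the shape of the computation, one decision per integrated pixel, is the same.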
However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image by arranging together corresponding pixels from each of the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image into an integrated pixel of the integrated image (“The data preprocessing comprises the following processing procedures of converting a multispectral image into an M × N matrix, wherein M is the number of multispectral image pixels, M is the image width × image height, and N is the number of wavelength values in the multispectral image, calculating the first M wavelengths which have larger contribution to the M × N matrix by using Principal Component Analysis (PCA), wherein the calculated M wavelengths can be selected A combination of wavelengths; in that The method comprises the steps of selecting the most relevant wavelength combination A from the wavelength combinations, wherein A is the wavelength combination containing n different wavelength values” [Pg. 7]), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the lung area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model (“the contribution of each spectral wavelength value to each pixel can be calculated by a multivariate curve resolution Method (MCR) and non-negative least squares iterative optimization (NNLS) in Principal Component Analysis (PCA), so that the contribution of autofluorescence is directly eliminated from the pixels.” [Pg. 10], “Through steps S101 to S103, data of the biological multispectral image is preprocessed, a maximum correlation region is selected as an input of a deep learning network, a correlation feature filter template is obtained based on a biological pathological mechanism design, and is used as an initial training weight, learning of revealing spectral correlation features is enhanced by adjusting weights and relative layer positions of each wavelength in the template, and finally a trained CNN neural network can be obtained” [Pg. 9-10]).
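Tan's quoted preprocessing, flattening the multispectral cube into an M × N matrix (M = width × height pixels, N wavelengths) and ranking wavelengths by their PCA contribution, can be sketched briefly. Treating the absolute loadings of the leading principal component as the "contribution" is one plausible reading of the quote, not a confirmed implementation; NumPy and every name below are assumptions, and the MCR/NNLS refinements Tan mentions are omitted:

```python
import numpy as np

def rank_wavelengths_by_contribution(cube):
    """Flatten a multispectral cube (H, W, N) into the M x N matrix
    (M = H * W pixels, N wavelengths), run PCA via SVD, and rank the
    wavelengths by the magnitude of their loading on the first
    principal component."""
    h, w, n = cube.shape
    X = cube.reshape(-1, n).astype(float)     # M x N
    X -= X.mean(axis=0)                       # center each wavelength
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    contribution = np.abs(Vt[0])              # |loading| per wavelength
    return np.argsort(contribution)[::-1]     # most contributory first

# Hypothetical eight-band cube; the top-ranked indices would form the
# selected wavelength combination.
cube = np.random.rand(120, 160, 8)
print(rank_wavelengths_by_contribution(cube)[:4])
```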
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate an integrated image from the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image by arranging together corresponding pixels from each of the first image, the second image, the third image, the fourth image, the fifth image, and the sixth image into an integrated pixel of the integrated image, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the lung area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6]. Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a lung tumor present in the lung area. However, Saito also teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a tumor (“a CNN system capable of classifying gastrointestinal images” [0214], “The CNN system was trained using endoscopic images captured daily in the clinic to which one of the inventors of the present invention belongs. The endoscope systems used included high-resolution or high-definition upper gastrointestinal endoscopes” [0290]). Saito does not specifically teach detecting, from the integrated image, a lung tumor. However, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Saito to detect, from the integrated image, a lung tumor, because it would yield the predictable result of identifying the tumor. Regarding Claim 28, Shelton discloses an image processing method according to which a computer executes processing (“The control system 133 includes a control circuit 132 in signal communication with a memory 134. The memory 134 stores instructions executable by the control circuit 132 to determine and/or recognize critical structures (e.g. the critical structure 101 in FIG. 1)” [0133]), the processing comprising: acquire a first image captured by a camera and obtained by irradiating a stomach area of a living body with first light representing light of a wavelength of 1065 nm to 1135 nm (“A surgical visualization system, as disclosed herein, can be employed in a number of different types of procedures for different medical specialties, such as … bariatric/gastric,” [0331], “An illustration of the utilization of spectral imaging techniques to visualize different tissue types and/or anatomical structures is shown in FIG. 13B.
… can visualize a tumor 2332, an artery 2334, and various abnormalities” [0176], “Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178]); acquire a second image captured by the camera and obtained by irradiating the stomach area of the living body with second light representing light of a wavelength of 1180 nm to 1230 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged) [Image: media_image1.png, greyscale]; acquire a third image captured by the camera and obtained by irradiating the stomach area of the living body with third light representing light of a wavelength of 1255 nm to 1325 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged); and acquire a fourth image captured by the camera and obtained by irradiating the stomach area of the living body with fourth light representing light of a wavelength of 1350 nm to 1425 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged).
Shelton also discloses that the processor is configured to determine whether or not a stomach tumor is present at each pixel (“Each of the aforementioned image portions 3072, 3074, 3076, 3078 can be fused together by the control system 133 to generate the fused image 3070 that provides for an unobstructed visualization of the tumor 3038 and any other relevant structures 3040.” [0272]). Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, whether a stomach tumor is present at each pixel, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the stomach area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model. However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image (“The data preprocessing comprises the following processing procedures of converting a multispectral image into an M × N matrix, wherein M is the number of multispectral image pixels, M is the image width × image height, and N is the number of wavelength values in the multispectral image, calculating the first M wavelengths which have larger contribution to the M × N matrix by using Principal Component Analysis (PCA), wherein the calculated M wavelengths can be selected A combination of wavelengths; in that The method comprises the steps of selecting the most relevant wavelength combination A from the wavelength combinations, wherein A is the wavelength combination containing n different wavelength values” [Pg. 7]), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the stomach area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model (“the contribution of each spectral wavelength value to each pixel can be calculated by a multivariate curve resolution Method (MCR) and non-negative least squares iterative optimization (NNLS) in Principal Component Analysis (PCA), so that the contribution of autofluorescence is directly eliminated from the pixels.” [Pg. 10],
“Through steps S101 to S103, data of the biological multispectral image is preprocessed, a maximum correlation region is selected as an input of a deep learning network, a correlation feature filter template is obtained based on a biological pathological mechanism design, and is used as an initial training weight, learning of revealing spectral correlation features is enhanced by adjusting weights and relative layer positions of each wavelength in the template, and finally a trained CNN neural network can be obtained” [Pg. 9-10]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the stomach area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6]. Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area. However, Saito also teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area (“a CNN system capable of classifying gastrointestinal images” [0214], “The CNN system was trained using endoscopic images captured daily in the clinic to which one of the inventors of the present invention belongs. The endoscope systems used included high-resolution or high-definition upper gastrointestinal endoscopes” [0290]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a stomach tumor present in the stomach area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004]. Regarding Claim 29, Shelton discloses an image processing method according to which a computer executes processing (“The control system 133 includes a control circuit 132 in signal communication with a memory 134. The memory 134 stores instructions executable by the control circuit 132 to determine and/or recognize critical structures (e.g. the critical structure 101 in FIG. 1)” [0133]),
the processing comprising: acquire a first image captured by a camera and obtained by irradiating a large bowel area of a living body with first light representing light of a wavelength of 1020 nm to 1140 nm (“A surgical visualization system, as disclosed herein, can be employed in a number of different types of procedures for different medical specialties, such as … bariatric/gastric,” [0331], “An illustration of the utilization of spectral imaging techniques to visualize different tissue types and/or anatomical structures is shown in FIG. 13B. … can visualize a tumor 2332, an artery 2334, and various abnormalities” [0176], “Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178]); acquire a second image captured by the camera and obtained by irradiating the large bowel area of the living body with second light representing light of a wavelength of 1140 nm to 1260 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged) [Image: media_image1.png, greyscale]; acquire a third image captured by the camera and obtained by irradiating the large bowel area of the living body with third light representing light of a wavelength of 1315 nm to 1430 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged); and acquire a fourth image captured by the camera and obtained by irradiating the large bowel area of the living body with fourth light representing light of a wavelength of 1430 nm to 1535 nm (“Tissues and/or structures can also be imaged or characterized according to their reflective characteristics, in addition to or in lieu of their absorptive characteristics described above with respect to FIGS. 13A and 13B, across the EMR wavelength spectrum. For example, FIGS. 13C-13E illustrate various graphs of reflectance of different types of tissues or structures across different EMR wavelengths.” [0178], Fig. 13D shows the wavelengths that are imaged).
Shelton also discloses that the processor is configured to determine whether or not a large bowel tumor is present at each pixel (“Each of the aforementioned image portions 3072, 3074, 3076, 3078 can be fused together by the control system 133 to generate the fused image 3070 that provides for an unobstructed visualization of the tumor 3038 and any other relevant structures 3040.” [0272]). Shelton does not specifically disclose that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, and that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the large bowel area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model. However, in a similar field of endeavor, Tan teaches that the processor is further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image (“The data preprocessing comprises the following processing procedures of converting a multispectral image into an M × N matrix, wherein M is the number of multispectral image pixels, M is the image width × image height, and N is the number of wavelength values in the multispectral image, calculating the first M wavelengths which have larger contribution to the M × N matrix by using Principal Component Analysis (PCA), wherein the calculated M wavelengths can be selected A combination of wavelengths; in that The method comprises the steps of selecting the most relevant wavelength combination A from the wavelength combinations, wherein A is the wavelength combination containing n different wavelength values” [Pg. 7]), wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the large bowel area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model (“the contribution of each spectral wavelength value to each pixel can be calculated by a multivariate curve resolution Method (MCR) and non-negative least squares iterative optimization (NNLS) in Principal Component Analysis (PCA), so that the contribution of autofluorescence is directly eliminated from the pixels.” [Pg. 10],
“Through steps S101 to S103, data of the biological multispectral image is preprocessed, a maximum correlation region is selected as an input of a deep learning network, a correlation feature filter template is obtained based on a biological pathological mechanism design, and is used as an initial training weight, learning of revealing spectral correlation features is enhanced by adjusting weights and relative layer positions of each wavelength in the template, and finally a trained CNN neural network can be obtained” [Pg. 9-10]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton as outlined above with the processor being further configured to generate an integrated image from the first image, the second image, the third image, and the fourth image by arranging together corresponding pixels from each of the first image, the second image, the third image, and the fourth image into an integrated pixel of the integrated image, wherein the wavelength of the light is a wavelength selected based on a contribution amount calculated according to a tumor discrimination rate obtained from images captured when only light of a specific wavelength band is irradiated onto the large bowel area of the living body, or based on a contribution amount calculated according to parameters of a learned model or a statistical model, as taught by Tan, because the learning speed is further improved, the specificity and the robustness of the deep learning are enhanced, and the multispectral classification with the maximum related connection probability is obtained [Pg. 6]. Tan does not specifically teach that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area. However, Saito also teaches that the processor is further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area (“a CNN system capable of classifying gastrointestinal images” [0214], “The CNN system was trained using endoscopic images captured daily in the clinic to which one of the inventors of the present invention belongs. The endoscope systems used included high-resolution or high-definition upper gastrointestinal endoscopes” [0290]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Shelton and Tan as outlined above with the processor being further configured to input the integrated image to a learned model or a statistical model for detecting, from the integrated image, a large bowel tumor present in the large bowel area, as taught by Saito, because gastric endoscopic examinations provide extremely useful information to differential diagnoses [0004]. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN MALDONADO, whose telephone number is 703-756-1421. The examiner can normally be reached 8:00 am-4:00 pm PST, M-Th. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski, can be reached at (571) 272-7230. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Steven Maldonado/ Patent Examiner, Art Unit 3797 /CHRISTOPHER KOHARSKI/ Supervisory Patent Examiner, Art Unit 3797

Prosecution Timeline

Jan 30, 2023
Application Filed
Jul 24, 2024
Non-Final Rejection — §103
Nov 20, 2024
Applicant Interview (Telephonic)
Nov 20, 2024
Examiner Interview Summary
Nov 27, 2024
Response Filed
Feb 19, 2025
Final Rejection — §103
Jun 26, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Aug 20, 2025
Non-Final Rejection — §103
Dec 23, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551289
Tracker-Based Surgical Navigation
2y 5m to grant · Granted Feb 17, 2026
Patent 12496034
Systems and Methods for Patient Monitoring
2y 5m to grant · Granted Dec 16, 2025
Patent 12484796
System and Method for Measuring Pulse Wave Velocity
2y 5m to grant · Granted Dec 02, 2025
Patent 12350095
Diagnostic Imaging Catheter and Diagnostic Imaging Apparatus
2y 5m to grant · Granted Jul 08, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
30%
Grant Probability
84%
With Interview (+54.2%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
