Prosecution Insights
Last updated: April 19, 2026
Application No. 18/129,649

SYSTEMS AND METHODS FOR GENERATING BIOMARKER ACTIVATION MAPS

Final Rejection §103

Filed: Mar 31, 2023
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Oregon Health & Science University
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allowance Rate: 82% (above average; 539 granted / 657 resolved; +20.0% vs Tech Center avg)
Interview Lift: +10.9% (moderate; allowance with vs. without interview, among resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 34 applications currently pending
Career History: 691 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Based on career data from 657 resolved cases; Tech Center average comparisons are estimates.

Office Action

§103
DETAILED ACTION

Response to Arguments

The amendments filed 11/26/2025 have been entered and made of record. Applicant's amendments and arguments filed 11/26/2025 have been fully considered but they are not persuasive.

Re Claim 5: claim 5, similarly to claim 8, has been amended to incorporate the limitations of previous claim 14 (now cancelled), which was rejected in the previous Non-Final Office Action of 8/26/2025. In the Applicant's Remarks (pages 7-8 of 9), Applicant asserts that the cited references, particularly Choi, are silent regarding the use of two distinct generators, a main generator and an assistant generator, each comprising a U-shaped neural network. However, the Examiner disagrees, because Choi discloses the use of two distinct generators: a main generator and an assistant generator (see Choi: e.g., Fig. 14, reproduced in the Office Action as media_image1.png, greyscale).

As demonstrated in Choi's Fig. 14, Choi's "First Model S1031" neural network reads on the claimed "main generator" neural network, while "Second Model S1033" reads on the claimed "assistant generator" neural network. The training, functions, and algorithms of Choi's "First Model" (aligned with the claimed main generator) and "Second Model" (aligned with the claimed assistant generator) are further disclosed and find support in Choi's disclosures:

---- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model.
When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images. [0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. 
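For illustration, the classifier behavior quoted in [0300]-[0301], a normal:abnormal probability pair compared against a decision threshold to pick a predicted label, can be sketched as follows. This is an editorial sketch only, not code from either reference; the function name and the 0.5 threshold are hypothetical.

```python
# Illustrative sketch of the threshold-based classifier decision described in
# the quoted [0300]-[0301]: the model outputs a normal:abnormal probability
# pair, and a predicted label is chosen by comparing the abnormal probability
# to a threshold. Function name and threshold value are hypothetical.

def predict_label(normal_abnormal, threshold=0.5):
    """Return 'abnormal' if the abnormal probability exceeds the threshold."""
    p_normal, p_abnormal = normal_abnormal
    assert abs(p_normal + p_abnormal - 1.0) < 1e-6  # probabilities sum to 1
    return "abnormal" if p_abnormal > threshold else "normal"

# The quoted example output of 0.74:0.26 (normal:abnormal) falls below a
# 0.5 threshold, so the predicted label would be "normal".
print(predict_label((0.74, 0.26)))  # normal
```

The same comparison generalizes to any threshold chosen during validation; only the cut-off, not the model output, changes.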
[0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302];

{Thus, it is clearly disclosed that Choi's above "First Model", aligned with the claimed "main generator", is trained with fundus images and, with the patient's fundus image as a diagnosis target image, outputs a probability value with respect to the presence of diabetic retinopathy, using a classifier that assumes 1 as normal.}

Also see Choi's disclosures of the Second Model, aligned with the claimed "assistant generator", in:

[0238]… Referring to FIG. 14, the training process of a neural network model according to an embodiment of the present invention may include obtaining a data set (S1011), training a first model (that is, first neural network model) and a second model (that is, second neural network model) using the obtained data (S1031, S1033), validating the trained first neural network model and second neural network model (S1051), and determining a final neural network model and obtaining parameters or variables thereof (S1072). … a plurality of sub-neural network models may obtain the same training data set and individually generate output values. In this case, an ensemble of the plurality of sub-neural network models may be determined as a final neural network model, and parameter values related to each of the plurality of sub-neural network models may be obtained as training results. An output value of the final neural network model may be set to an average value of the output values by the sub-neural network models.
Alternatively, in consideration of accuracy obtained as a result of validating each of the sub-neural network models, the output value of the final neural network model may be set to a weighted average value of the output values of the sub-neural network models. [0241] As a more specific example, when a neural network model includes a first sub-neural network model and a second sub-neural network model, optimized parameter values of the first sub-neural network model and optimized parameter values of the second sub-neural network model may be obtained by machine learning. In this case, an average value of output values (for example, probability values related to specific diagnosis assistance information) obtained from the first sub-neural network model and second sub-neural network model may be determined as an output value of the final neural network model.--, in [0238]-[0241];

Also see: -- [0252] The training device may obtain a second training data set which includes the plurality of fundus images and at least partially differs from the first training data set and may train a second neural network model using the second training data set. [0253] According to an embodiment of the present invention, the control method of the training device may further include pre-processing a second fundus image so that the second fundus image included in the second data training set is suitable for training the second neural network model, serializing the pre-processed second fundus image, and training the second neural network model that classifies the target fundus image as a third label or a fourth label by using the serialized second fundus image.--, in [0252]-[0254], and [0257]-[0259];

In addition, Choi discloses a plurality of examples of neural networks applicable to the above first neural network model (the claimed "main generator") and second model (the claimed "assistant generator") in: -- [0219] A neural network model may include a convolutional neural network (CNN).
As a CNN structure, at least one of AlexNet, LENET, NIN, VGGNet, ResNet, WideResnet, GoogleNet, FractaNet, DenseNet, FitNet, RitResNet, HighwayNet, MobileNet, and DeeplySupervisedNet may be used. The neural network model may be implemented using a plurality of CNN structures. [0220] For example, a neural network model may be implemented to include a plurality of VGGNet blocks. As a more specific example, a neural network model may be provided by coupling between a first structure in which a 3×3 CNN layer having 64 filters, a batch normalization (BN) layer, and a ReLu layer are sequentially coupled and a second block in which a 3×3 CNN layer having 128 filters, a ReLu layer, and a BN layer are sequentially coupled. [0221] A neural network model may include a max pooling layer subsequent to each CNN block and include a global average pooling (GAP) layer, a fully connected (FC) layer, and an activation layer (for example, sigmoid, softmax, and the like) at an end.--; [0219]-[0221]; further see: 2.3.3 Fundus Image Reconstruction [0528] A fundus image may be reconstructed for training of a heart disease diagnosis assistance neural network model or for assistance in heart disease diagnosis using the neural network model. The reconstruction of the fundus image may be performed by the above-described diagnosis assistance system, diagnostic device, client device, mobile device, or server device. The control unit or processor of each device may perform the reconstruction of the image. [0529] The reconstruction of the fundus image may include modifying the fundus image to a form in which efficiency of the training of the heart disease diagnosis assistance neural network model or the assistance in the heart disease diagnosis using the neural network model may be improved. For example, the reconstruction of the image may include blurring the fundus image or changing chromaticity or saturation of the fundus image. 
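For illustration, the blurring step in the fundus-image reconstruction quoted above ([0528]-[0529]) can be sketched with a simple separable Gaussian filter. This is an editorial sketch only, not either reference's implementation; the kernel radius and sigma are hypothetical.

```python
import math

# Illustrative sketch of blurring a grayscale image with a separable Gaussian
# filter, as in the quoted reconstruction step ("blurring the fundus image").
# Kernel radius and sigma are hypothetical; this is not code from either reference.

def gaussian_kernel(radius=1, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur(image, radius=1, sigma=1.0):
    """Apply the kernel along rows, then columns (edges clamped)."""
    k = gaussian_kernel(radius, sigma)
    h, w = len(image), len(image[0])

    def sample(img, r, c):
        # Clamp out-of-range coordinates to the nearest valid pixel.
        return img[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]

    rows = [[sum(k[i + radius] * sample(image, r, c + i)
                 for i in range(-radius, radius + 1))
             for c in range(w)] for r in range(h)]
    return [[sum(k[i + radius] * sample(rows, r + i, c)
                 for i in range(-radius, radius + 1))
             for c in range(w)] for r in range(h)]

# A single bright pixel spreads its intensity to its neighbours while the
# total intensity is preserved.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
out = blur(img)
print(round(out[2][2], 3))  # centre keeps the largest share
```

The subsequent vessel-highlighting step in the quoted [0539] would then operate on this blurred image, e.g. by subtracting it from the original to emphasize high-frequency structures such as vessels.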
[0530] For example, when the size of a fundus image is reduced or a color channel thereof is simplified, since the amount of data that needs to be processed by a neural network model is reduced, accuracy of a result or a speed of obtaining a result may be improved…. [0539] According to an embodiment, reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0540] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation……, in [0528]-[0542];

Although Choi's above disclosures may include many significant components or characteristics of a U-shaped neural network, such as U-Net, Choi does not explicitly disclose a U-shaped neural network. It is well known that U-shaped neural networks had been applied in medical image processing before the effective filing date of the claimed invention, as would have been known to a person having ordinary skill in the art to which the claimed invention pertains, as evidenced by PANES SAAVEDRA's disclosures:

PANES SAAVEDRA discloses generating a biomarker activation map (BAM) by inputting the medical image into a trained, U-shaped neural network (NN) (see PANES SAAVEDRA: e.g., --the MRI images 150 may be processed with a neural network processing procedure 210. The neural network processing procedure 210 may be used to discriminate between different joint tissues and determine boundaries of each of the joint tissues. In some examples, the neural network processing procedure 210 may include a deep learning model based on a U-shaped convolutional neural network.
The neural network may take a two-dimensional (2D) MRI image as an input with dimensions H×W×1 (where H is the height in pixels and W is the width in pixels) and output a segmented image with dimensions H×W×7. The seven (“7”) may correspond to the number of independent probability maps that are output to distinguish and/or define tissues and bones associated with the selected joint.-- , in [0064], and, -- first, a series of deterministic operations transform the independent segmentations that come from different planes into a common reference frame. These operations include a voxel isotropication process, which consists in a regular upscaling of the input images in order to generate enhanced representations with isotropic voxels, plus an image alignment process including affine image registration techniques that ultimately allows the anatomical superposition of different image plane views. Second, the multi-planar combination model comprising the application of a U-shaped fully convolutional neural network to the set of previously processed segmented planes in order to produce a unique high-resolution and quasi-isotropic volumetric representation. As a result, this multi-planar combination model may produce segmented three-dimensional (3D) images 215. Returning to the example of the knee, the segmented 3D images 215 may include a femoral bone, femoral cartilage, tibial bone, tibial cartilage, patellar bone, patellar cartilage and menisci.--, in [0068], and, -- In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). 
The stacked set of features maps may function as the input for the next convolutional layer.--, in [0105]);

CHOI and PANES SAAVEDRA are combinable as they are in the same field of endeavor: neural networks in medical image processing and analysis and corresponding diagnosis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify CHOI's method using PANES SAAVEDRA's teachings, by applying the generation of a biomarker activation map (BAM) via a trained, U-shaped neural network (NN) to CHOI's first and second neural networks, in order to solve image classification tasks (see PANES SAAVEDRA: e.g., in [0064], [0068], and [0105]).

It is further pointed out that claim 5 does not limit the functions or any characteristics of the "assistant generator". Newly added claim 21 includes the limitations of "generating, using the assistant generator, a cycled image based on the forged image" and "generating, using the assistant generator, a preserved image based on the first medical image"; however, the meaning and specific contents of "forged image", "cycled image", and "preserved image" have not been limited or defined anywhere in the set of claims. Accordingly, "forged image" is interpreted as referring to the manipulation of a digital image to hide information; "cycled image" is interpreted as a "labeled image"; and "preserved image" is interpreted as referring to the manipulation of a digital image to preserve information. Therefore, claims 5-13, 15-16, and 21 are still not patentably distinguishable over the prior art reference(s). Further discussions are addressed in the prior art rejection section below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-12, 15, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210327062 A1) in view of PANES SAAVEDRA (US 20240268699 A1, which claims priority to U.S. Provisional Application 63/260,550, filed 08/25/2021; the corresponding contents and subject matter relied upon in this Office Action have been confirmed as disclosed in that provisional application, such as in Fig. 2 and [0066]).

Re Claim 5, CHOI discloses a method, comprising: identifying a first medical image depicting at least a portion of a subject (see CHOI: e.g., Fig. 11, and, -- a method of assisting in diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image which is obtained by imaging a retina of a testee; on the basis of the target retinal image--, in abstract); generating a biomarker activation map (BAM) by inputting the first medical image into a main generator comprising a trained, neural network (NN) (see CHOI: e.g., -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM.
The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. 
The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. 
[0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]

{CHOI's CAM reads on the claimed "biomarker activation map (BAM)", because "Saliency map, a heat map, a feature map or a probability map" are aligned with biomarkers.}

See CHOI: e.g., -- [0445] Hereinafter, a system, a device, and a method for providing diagnosis assistance information related to a heart disease in order to assist in heart disease diagnosis using a fundus image will be described. The heart disease diagnosis assistance will be described below with reference to the foregoing description with reference to FIGS. 1 to 30. [0446] For management of cardiovascular diseases, biomarkers which are used directly or indirectly for disease diagnosis may be used. For management of cardiovascular diseases, a method of managing an extent of risk of a disease in consideration of an index, a score, an indicator, or the like (hereinafter referred to as "score") related to the disease may be used. For diseases diagnosed in consideration of values such as scores, providing a score instead of the presence or absence of a disease may be more efficient because it allows a clinician to determine directly a patient's condition or treatment for the patient in consideration of the score. [0447] The heart disease described herein may refer to cerebrovascular and cardiovascular diseases.
The heart disease may refer to diseases related to the brain, heart, or blood vessels including a coronary artery disease such as a heart attack or angina, a coronary heart disease, an ischemic heart disease, a congestive heart failure, a peripheral vascular disease, cardiac arrest, a valvular heart disease, a cerebrovascular disease (for example, stroke, cerebral infarction, cerebral hemorrhage, or transient ischemic attack), and a renovascular disease.--, in [0445]-[0447], and [0452]-[0455], and {as above, CHOI discloses that "the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information", so that CHOI's "activation map" corresponds to "biomarkers", such as the "scores" of heart diseases disclosed in [0445]-[0447] and [0452]-[0455]});

CHOI, however, does not explicitly disclose that the above neural network is a U-shaped neural network (NN).

PANES SAAVEDRA discloses generating a biomarker activation map (BAM) by inputting the medical image into a trained, U-shaped neural network (NN) (see PANES SAAVEDRA: e.g., --the MRI images 150 may be processed with a neural network processing procedure 210. The neural network processing procedure 210 may be used to discriminate between different joint tissues and determine boundaries of each of the joint tissues. In some examples, the neural network processing procedure 210 may include a deep learning model based on a U-shaped convolutional neural network. The neural network may take a two-dimensional (2D) MRI image as an input with dimensions H×W×1 (where H is the height in pixels and W is the width in pixels) and output a segmented image with dimensions H×W×7.
The seven (“7”) may correspond to the number of independent probability maps that are output to distinguish and/or define tissues and bones associated with the selected joint.-- , in [0064], and, -- first, a series of deterministic operations transform the independent segmentations that come from different planes into a common reference frame. These operations include a voxel isotropication process, which consists in a regular upscaling of the input images in order to generate enhanced representations with isotropic voxels, plus an image alignment process including affine image registration techniques that ultimately allows the anatomical superposition of different image plane views. Second, the multi-planar combination model comprising the application of a U-shaped fully convolutional neural network to the set of previously processed segmented planes in order to produce a unique high-resolution and quasi-isotropic volumetric representation. As a result, this multi-planar combination model may produce segmented three-dimensional (3D) images 215. Returning to the example of the knee, the segmented 3D images 215 may include a femoral bone, femoral cartilage, tibial bone, tibial cartilage, patellar bone, patellar cartilage and menisci.--, in [0068], and, -- In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). 
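For illustration, the output format quoted from PANES SAAVEDRA's [0064], an H×W×7 stack of per-pixel probability maps, can be sketched by applying a softmax across channels at each pixel. This is an editorial sketch only; the shapes, scores, and function names are hypothetical and not taken from the reference.

```python
import math

# Illustrative sketch of turning per-pixel class scores (H x W x C) into
# C probability maps, matching the quoted idea of an H x W x 7 output of
# per-pixel probability maps. The toy scores are hypothetical.

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def probability_maps(scores, num_classes):
    """scores: H x W x C nested lists -> list of C maps, each H x W."""
    h, w = len(scores), len(scores[0])
    maps = [[[0.0] * w for _ in range(h)] for _ in range(num_classes)]
    for r in range(h):
        for c in range(w):
            for k, p in enumerate(softmax(scores[r][c])):
                maps[k][r][c] = p
    return maps

# A 1x2 "image" with 3 classes: at every pixel, the probabilities across the
# maps sum to 1, so each map can be read as a per-class probability image.
scores = [[[2.0, 1.0, 0.0], [0.0, 3.0, 0.0]]]
maps = probability_maps(scores, 3)
print(round(sum(m[0][0] for m in maps), 6))  # 1.0
```

A segmentation would then assign each pixel to the class whose map holds the largest probability at that pixel.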
The stacked set of features maps may function as the input for the next convolutional layer.--, in [0105]); CHOI and PANES SAAVEDRA are combinable as they are in the same field of endeavor: neural network in medical image processing and analysis and corresponding diagnosis. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify CHOI’s method using PANES SAAVEDRA’s teachings by including generating a biomarker activation map (BAM) by inputting the medical image into a trained, U-shaped neural network (NN) to CHOI’s main generator comprising a trained neural network in order to solve image classification tasks (see PANES SAAVEDRA: e.g. in [0064], [0068], and [0105]); and CHOI as modified by PANES SAAVEDRA further disclose outputting the BAM overlaying the first medical image, the BAM indicating at least one biomarker depicted in the first medical image that is indicative of a disease (see CHOI: e.g., -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. 
[0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. [0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; and, -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. 
Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. 
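For illustration, the superimposition described in the quoted passage, outputting a CAM so that it overlays the fundus image, can be sketched as a per-pixel alpha blend of a heat-map value onto the source intensity. This is an editorial sketch only; the alpha value and toy data are hypothetical and not from either reference.

```python
# Illustrative sketch of superimposing a CAM-style heat map on a grayscale
# image by per-pixel alpha blending, as in the quoted overlay description.
# Alpha and the toy data are hypothetical; not code from either reference.

def overlay(image, cam, alpha=0.5):
    """Blend per pixel: (1 - alpha) * image + alpha * cam."""
    return [[(1 - alpha) * px + alpha * hv
             for px, hv in zip(img_row, cam_row)]
            for img_row, cam_row in zip(image, cam)]

image = [[0.5, 0.5], [0.5, 0.5]]   # uniform grayscale image
cam   = [[1.0, 0.0], [0.0, 0.0]]   # activation concentrated at one pixel
blended = overlay(image, cam)
print(blended[0][0])  # 0.75 -> the highly activated pixel is brightened
```

In practice the CAM is usually mapped through a color palette before blending, so activated regions stand out against the grayscale fundus image.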
For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. [0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]); wherein the first trained U-Shaped NN is trained by: identifying a classifier trained to identify the presence and/or absence of the disease in the first medical image (see CHOI: e.g., -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings 
information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. [0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; and,-- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. 
a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. 
When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. 
[0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]; and, --[1125] For example, the outputting of the diagnosis assistance information (S1107) may include comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information and determining the output diagnosis assistance information in consideration of a result of comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information. [1126] The determining of the output diagnosis assistance information in consideration of the result of the comparison may include, when the left-eye diagnosis assistance information and the right-eye diagnosis assistance information are logically consistent, determining the left-eye diagnosis assistance information, the right-eye diagnosis assistance information, or intermediate information (a median) between the left-eye diagnosis assistance information and the right-eye diagnosis assistance information as the output diagnosis assistance information.--, in [1125]-[1127]; also see PANES SAAVEDRA: e.g., --[0105] In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. 
After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). The stacked set of features maps may function as the input for the next convolutional layer. This block of operations may be referred to as the perceptual block. Note that the matrix components that define the filtering matrices may be learned during the training process and may be called weights. In general there can be several convolutional layers before proceeding to a subsequent step of the deep-learning model, which may be called the logical block. After the input image is sequentially filtered by the elements of the perceptual block, the final activation map may be reshaped as a 1D vector. At this step, the tabulated complementary data can be added to the 1D vector by a concatenation operation. The resulting 1D vector is the input for a subsequent series of operations of the deep-learning model. Typically, this vector is passed through a series of dense layers that incorporate non-linear operations between the 1D vector components. This process ends with a final layer that contains N neurons, one for each class, that record values that can be interpreted as a discrete probability distribution. The final decision of the network is usually defined as the neuron/class with the highest probability.--, in [0105]); identifying an assistant generator comprising a second U-shaped NN; identifying training data comprising second medical images and indications of whether the second medical images depict the disease; and training the main generator, the classifier, and the assistant generator based on the training data (see Choi: e.g., Fig. 14 [image reproduced in the original action]. As demonstrated in Choi’s Fig.
14, Choi discloses a “First Model S1031” neural network that reads on the claimed “main generator” neural network, while the “Second Model S1033” reads on the claimed “assistant generator” neural network; the training, functions, and algorithms of Choi’s “First Model” (aligned with the claimed “main generator”) and “Second Model” (aligned with the claimed “assistant generator”) are further disclosed and find support in Choi’s disclosures: ---- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model. When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images. [0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output.
When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. [0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; {thus, Choi clearly discloses that the “First Model,” aligned with the claimed “main generator,” is trained with fundus images and, taking the patient's fundus image as a diagnosis target image, outputs a probability value with respect to the presence of diabetic retinopathy, with a classifier that assumes 1 as normal identified}; also see Choi’s disclosures of the Second Model, aligned with the claimed “assistant generator,” in: [0238]… Referring to FIG.
14, the training process of a neural network model according to an embodiment of the present invention may include obtaining a data set (S1011), training a first model (that is, first neural network model) and a second model (that is, second neural network model) using the obtained data (S1031, S1033), validating the trained first neural network model and second neural network model (S1051), and determining a final neural network model and obtaining parameters or variables thereof (S1072). … a plurality of sub-neural network models may obtain the same training data set and individually generate output values. In this case, an ensemble of the plurality of sub-neural network models may be determined as a final neural network model, and parameter values related to each of the plurality of sub-neural network models may be obtained as training results. An output value of the final neural network model may be set to an average value of the output values by the sub-neural network models. Alternatively, in consideration of accuracy obtained as a result of validating each of the sub-neural network models, the output value of the final neural network model may be set to a weighted average value of the output values of the sub-neural network models. [0241] As a more specific example, when a neural network model includes a first sub-neural network model and a second sub-neural network model, optimized parameter values of the first sub-neural network model and optimized parameter values of the second sub-neural network model may be obtained by machine learning. 
In this case, an average value of output values (for example, probability values related to specific diagnosis assistance information) obtained from the first sub-neural network model and second sub-neural network model may be determined as an output value of the final neural network model.--, in [0238]-[0241]; also see: -- [0252] The training device may obtain a second training data set which includes the plurality of fundus images and at least partially differs from the first training data set and may train a second neural network model using the second training data set. [0253] According to an embodiment of the present invention, the control method of the training device may further include pre-processing a second fundus image so that the second fundus image included in the second data training set is suitable for training the second neural network model, serializing the pre-processed second fundus image, and training the second neural network model that classifies the target fundus image as a third label or a fourth label by using the serialized second fundus image.--, in [0252]-[0254] and [0257]-[0259]; In addition, Choi discloses a plurality of examples of neural networks applicable to the above first neural network model (the claimed “main generator”) and second model (the claimed “assistant generator”) in: -- [0219] A neural network model may include a convolutional neural network (CNN). As a CNN structure, at least one of AlexNet, LENET, NIN, VGGNet, ResNet, WideResnet, GoogleNet, FractaNet, DenseNet, FitNet, RitResNet, HighwayNet, MobileNet, and DeeplySupervisedNet may be used. The neural network model may be implemented using a plurality of CNN structures. [0220] For example, a neural network model may be implemented to include a plurality of VGGNet blocks.
As a more specific example, a neural network model may be provided by coupling between a first structure in which a 3×3 CNN layer having 64 filters, a batch normalization (BN) layer, and a ReLu layer are sequentially coupled and a second block in which a 3×3 CNN layer having 128 filters, a ReLu layer, and a BN layer are sequentially coupled. [0221] A neural network model may include a max pooling layer subsequent to each CNN block and include a global average pooling (GAP) layer, a fully connected (FC) layer, and an activation layer (for example, sigmoid, softmax, and the like) at an end.--; [0219]-[0221]; further see: 2.3.3 Fundus Image Reconstruction [0528] A fundus image may be reconstructed for training of a heart disease diagnosis assistance neural network model or for assistance in heart disease diagnosis using the neural network model. The reconstruction of the fundus image may be performed by the above-described diagnosis assistance system, diagnostic device, client device, mobile device, or server device. The control unit or processor of each device may perform the reconstruction of the image. [0529] The reconstruction of the fundus image may include modifying the fundus image to a form in which efficiency of the training of the heart disease diagnosis assistance neural network model or the assistance in the heart disease diagnosis using the neural network model may be improved. For example, the reconstruction of the image may include blurring the fundus image or changing chromaticity or saturation of the fundus image. [0530] For example, when the size of a fundus image is reduced or a color channel thereof is simplified, since the amount of data that needs to be processed by a neural network model is reduced, accuracy of a result or a speed of obtaining a result may be improved…. 
[0539] According to an embodiment, reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0540] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation……, in [0528]-[0542]; so that, as discussed above, CHOI’s second neural network, aligned with the claimed “assistant generator,” can be substituted with PANES SAAVEDRA’s teaching of a U-shaped NN; see PANES SAAVEDRA: e.g., --the MRI images 150 may be processed with a neural network processing procedure 210. The neural network processing procedure 210 may be used to discriminate between different joint tissues and determine boundaries of each of the joint tissues. In some examples, the neural network processing procedure 210 may include a deep learning model based on a U-shaped convolutional neural network. The neural network may take a two-dimensional (2D) MRI image as an input with dimensions H×W×1 (where H is the height in pixels and W is the width in pixels) and output a segmented image with dimensions H×W×7. The seven (“7”) may correspond to the number of independent probability maps that are output to distinguish and/or define tissues and bones associated with the selected joint.-- , in [0064], and, -- first, a series of deterministic operations transform the independent segmentations that come from different planes into a common reference frame.
These operations include a voxel isotropication process, which consists in a regular upscaling of the input images in order to generate enhanced representations with isotropic voxels, plus an image alignment process including affine image registration techniques that ultimately allows the anatomical superposition of different image plane views. Second, the multi-planar combination model comprising the application of a U-shaped fully convolutional neural network to the set of previously processed segmented planes in order to produce a unique high-resolution and quasi-isotropic volumetric representation. As a result, this multi-planar combination model may produce segmented three-dimensional (3D) images 215. Returning to the example of the knee, the segmented 3D images 215 may include a femoral bone, femoral cartilage, tibial bone, tibial cartilage, patellar bone, patellar cartilage and menisci.--, in [0068], and, -- In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). The stacked set of features maps may function as the input for the next convolutional layer.--, in [0105]). 
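The class activation map (CAM) mechanism cited repeatedly above from Choi (¶¶[0303]-[0310], [0765]-[0766]) — weighting the final convolutional feature maps by the classifier weights of the predicted class and normalizing the result so it can be superimposed on the fundus image — can be sketched in a few lines. This is a minimal illustrative sketch only; the array shapes, seed, and variable names are assumptions for exposition, not Choi's or the Applicant's implementation:

```python
import numpy as np

# Illustrative shapes: 4 feature channels over an 8x8 spatial grid, 2 classes.
rng = np.random.default_rng(0)
feature_maps = rng.random((4, 8, 8))   # stand-in for the last conv layer's output
fc_weights = rng.random((2, 4))        # one weight vector per class (normal/abnormal)

# Global average pooling followed by the linear classifier gives per-class scores.
pooled = feature_maps.mean(axis=(1, 2))      # shape (4,)
scores = fc_weights @ pooled                 # shape (2,)
predicted = int(np.argmax(scores))           # index of the predicted class

# CAM: sum the feature maps weighted by the predicted class's weights.
cam = np.tensordot(fc_weights[predicted], feature_maps, axes=1)  # shape (8, 8)

# Normalize to [0, 1] so the map can be superimposed on the target fundus image,
# as Choi ¶[0765] describes for the output unit.
cam = (cam - cam.min()) / (cam.max() - cam.min())
print(cam.shape)
```

The same normalized map generalizes to the saliency/heat/probability-map variants Choi ¶[0670] folds into its definition of a class activation map.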
Re Claim 6, CHOI as modified by PANES SAAVEDRA further disclose wherein the medical image comprises at least one of an x-ray image, a magnetic resonance imaging (MRI) image, a functional MRI (fMRI) image, a single-photon emission computerized tomography (SPECT) image, a positron emission tomography (PET) image, an ultrasound image, an infrared image, a computed tomography (CT) image, an optical coherence tomography (OCT) image, an OCT angiography (OCTA) image, a color fundus photograph (CFP) image, a fluorescein angiography (FA) image, or an ultra-widefield retinal image (see CHOI: e.g., Fig. 11, and, -- a method of assisting in diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image which is obtained by imaging a retina of a testee; on the basis of the target retinal image--, in abstract; and, fundus images used in training of a heart disease diagnosis assistance neural network model and obtaining of heart disease diagnosis assistance information through the heart disease diagnosis assistance neural network model may be understood as images in various forms that are obtained by imaging elements of a fundus. For example, fundus images may include an optical coherence tomography (OCT) image, an OCT angiography image, or a fundus angiography image. Also, various forms of fundus images described above in “Obtaining image data” section may be used as the fundus images described herein. For example, a panorama fundus image, a wide fundus image, a red-free fundus image, an infrared fundus image, an autofluorescence fundus image, or the like may be used as the fundus images described herein. [0503] In other words, the heart disease diagnosis assistance neural network model which will be described below may be trained using an OCT image, an OCT angiography image, or a fundus angiography image.
Alternatively, the heart disease diagnosis assistance neural network model which will be described below may be trained using a panorama fundus image, a wide fundus image, a red-free fundus image, an infrared fundus image, an autofluorescence fundus image, or the like.--, in [0502]-[0507]; also PANES SAAVEDRA: e.g., --the MRI images 150 may be processed with a neural network processing procedure 210. The neural network processing procedure 210 may be used to discriminate between different joint tissues and determine boundaries of each of the joint tissues. In some examples, the neural network processing procedure 210 may include a deep learning model based on a U-shaped convolutional neural network. The neural network may take a two-dimensional (2D) MRI image as an input with dimensions H×W×1 (where H is the height in pixels and W is the width in pixels) and output a segmented image with dimensions H×W×7. The seven (“7”) may correspond to the number of independent probability maps that are output to distinguish and/or define tissues and bones associated with the selected joint.-- , in [0064]). Re Claim 7, CHOI as modified by PANES SAAVEDRA further disclose wherein the medical image comprises multiple channels respectively corresponding to different imaging modalities (see CHOI: e.g., Fig. 11, and, -- a method of assisting in diagnosis of a target heart disease using a retinal image, the method including: obtaining a target retinal image which is obtained by imaging a retina of a testee; on the basis of the target retinal image--, in abstract; and, fundus images used in training of a heart disease diagnosis assistance neural network model and obtaining of heart disease diagnosis assistance information through the heart disease diagnosis assistance neural network model may be understood as images in various forms that are obtained by imaging elements of a fundus. 
For example, fundus images may include an optical coherence tomography (OCT) image, an OCT angiography image, or a fundus angiography image. Also, various forms of fundus images described above in “Obtaining image data” section may be used as the fundus images described herein. For example, a panorama fundus image, a wide fundus image, a red-free fundus image, an infrared fundus image, an autofluorescence fundus image, or the like may be used as the fundus images described herein. [0503] In other words, the heart disease diagnosis assistance neural network model which will be described below may be trained using an OCT image, an OCT angiography image, or a fundus angiography image. Alternatively, the heart disease diagnosis assistance neural network model which will be described below may be trained using a panorama fundus image, a wide fundus image, a red-free fundus image, an infrared fundus image, an autofluorescence fundus image, or the like.--, in [0502]-[0507]; also PANES SAAVEDRA: e.g., --the MRI images 150 may be processed with a neural network processing procedure 210. The neural network processing procedure 210 may be used to discriminate between different joint tissues and determine boundaries of each of the joint tissues. In some examples, the neural network processing procedure 210 may include a deep learning model based on a U-shaped convolutional neural network. The neural network may take a two-dimensional (2D) MRI image as an input with dimensions H×W×1 (where H is the height in pixels and W is the width in pixels) and output a segmented image with dimensions H×W×7. The seven (“7”) may correspond to the number of independent probability maps that are output to distinguish and/or define tissues and bones associated with the selected joint.-- , in [0064]). 
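Claim 7's limitation — a medical image whose channels respectively correspond to different imaging modalities — amounts to channel-wise stacking of co-registered, same-sized acquisitions into one network input. A minimal sketch under that assumption (the modality names and constant-valued planes below are illustrative placeholders only):

```python
import numpy as np

# Three co-registered grayscale planes standing in for different modalities
# (e.g., OCT, OCTA, and a CFP luminance plane), all the same H x W size.
h, w = 64, 64
oct_img = np.zeros((h, w), dtype=np.float32)
octa_img = np.ones((h, w), dtype=np.float32)
cfp_img = np.full((h, w), 0.5, dtype=np.float32)

# Stack along a trailing channel axis: one H x W x C input, one modality per channel.
multi_channel = np.stack([oct_img, octa_img, cfp_img], axis=-1)
print(multi_channel.shape)  # (64, 64, 3)
```

In practice the planes would first be registered to a common frame (cf. the affine image-registration step quoted from PANES SAAVEDRA ¶[0068]) so that corresponding pixels across channels describe the same anatomical location.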
Re Claim 8, CHOI as modified by PANES SAAVEDRA further disclose wherein generating the BAM by inputting the medical image into the first trained U-shaped NN comprises: generating a first intermediary image based on the medical image; generating a second intermediary image by inputting the first intermediary image into a first residual block, the first residual block comprising at least one first convolution block; generating a third intermediary image by inputting the second intermediary image into a second residual block, the second residual block comprising at least one second convolution block; generating a fourth intermediary image by inputting the third intermediary image into a deconvolution block; generating a fifth intermediary image by concatenating the second intermediary image and the fourth intermediary image; and generating the BAM based on the fifth intermediary image (see CHOI: e.g., -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below.
Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. 
For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. [0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]; and, --[1125] For example, the outputting of the diagnosis assistance information (S1107) may include comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information and determining the output diagnosis assistance information in consideration of a result of comparing the left-eye diagnosis assistance information and the 
right-eye diagnosis assistance information. [1126] The determining of the output diagnosis assistance information in consideration of the result of the comparison may include, when the left-eye diagnosis assistance information and the right-eye diagnosis assistance information are logically consistent, determining the left-eye diagnosis assistance information, the right-eye diagnosis assistance information, or intermediate information (a median) between the left-eye diagnosis assistance information and the right-eye diagnosis assistance information as the output diagnosis assistance information.--, in [1125]-[1127]; also see PANES SAAVEDRA: e.g., --[0105] In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). The stacked set of features maps may function as the input for the next convolutional layer. This block of operations may be referred to as the perceptual block. Note that the matrix components that define the filtering matrices may be learned during the training process and may be called weights. In general there can be several convolutional layers before proceeding to a subsequent step of the deep-learning model, which may be called the logical block. After the input image is sequentially filtered by the elements of the perceptual block, the final activation map may be reshaped as a 1D vector. At this step, the tabulated complementary data can be added to the 1D vector by a concatenation operation. The resulting 1D vector is the input for a subsequent series of operations of the deep-learning model. 
Typically, this vector is passed through a series of dense layers that incorporate non-linear operations between the 1D vector components. This process ends with a final layer that contains N neurons, one for each class, that record values that can be interpreted as a discrete probability distribution. The final decision of the network is usually defined as the neuron/class with the highest probability.--, in [0105]).

Re Claim 9, CHOI as modified by PANES SAAVEDRA further discloses wherein generating the BAM based on the fifth intermediary image comprises: generating a first output image based on the fifth intermediary image; generating a second output image by inputting the first output image into a third convolution block; and generating the BAM based on the second output image (see CHOI: e.g., [0303]-[0310], [0670]-[0671], [0765]-[0766], and [1125]-[1127]; also see PANES SAAVEDRA: e.g., [0105]; all as quoted in the rejection of Claim 8 above).

Re Claim 10, CHOI as modified by PANES SAAVEDRA further discloses wherein generating the BAM based on the second output image comprises: generating a third output image by performing Tanh activation on the second output image; generating a fourth output image by adding the medical image to the third output image; and generating the BAM based on the fourth output image (see CHOI: e.g., [0303]-[0310], [0670]-[0671], [0765]-[0766], and [1125]-[1127]; also see PANES SAAVEDRA: e.g., [0105]; all as quoted in the rejection of Claim 8 above).

Re Claim 11, CHOI as modified by PANES SAAVEDRA further discloses wherein outputting the BAM overlaying the medical image comprises causing a display to visually output the BAM overlaying the medical image (see CHOI: e.g., [0303]-[0310], [0670]-[0671], and [0765]-[0766], as quoted in the rejection of Claim 8 above).

Re Claim 12, CHOI as modified by PANES SAAVEDRA further discloses predicting a level of the disease depicted by the medical image by inputting the medical image into a trained classifier; and outputting the level of the disease (see CHOI: e.g., [0303]-[0310], [0670]-[0671], and [0765]-[0766], as quoted in the rejection of Claim 8 above).

Re Claim 15, CHOI as modified by PANES SAAVEDRA further discloses wherein the disease comprises diabetic retinopathy (DR), macular degeneration, a tumor, or inflammation (see CHOI: e.g., -- the fundus examination has been increasingly used because, through the fundus examination, it is able to observe not only eye diseases but also a degree of blood vessel damage caused by chronic diseases such as hypertension and diabetes by a non-invasive method.--, in [0003]; -- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model. When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images.
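For orientation, the image-processing pipeline recited in Claims 8-10 (residual blocks, a deconvolution block, a concatenation skip connection, a third convolution block, Tanh activation, and a residual addition of the input image) can be traced end to end in a minimal NumPy sketch. The `conv_block`, `residual_block`, and `deconv_block` functions below are hypothetical, shape-preserving stand-ins for learned layers — an illustration of the claimed data flow, not a reproduction of Applicant's or CHOI's actual model:

```python
import numpy as np

def conv_block(x):
    # Stand-in for a learned convolution block: a 3x3 box filter
    # implemented with edge padding and averaging.
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def residual_block(x):
    # Residual block: input plus a convolutional path
    # (a single conv block here, for brevity).
    return x + conv_block(x)

def deconv_block(x):
    # Stand-in for a learned deconvolution; kept shape-preserving
    # so the skip concatenation below lines up.
    return conv_block(x)

def generate_bam(medical):
    first = conv_block(medical)             # first intermediary image
    second = residual_block(first)          # second intermediary (first residual block)
    third = residual_block(second)          # third intermediary (second residual block)
    fourth = deconv_block(third)            # fourth intermediary (deconvolution block)
    fifth = np.stack([second, fourth])      # fifth intermediary: concatenation (skip)
    out1 = fifth.mean(axis=0)               # first output image, fused from the concat
    out2 = conv_block(out1)                 # second output (third convolution block)
    out3 = np.tanh(out2)                    # third output (Tanh activation)
    return medical + out3                   # fourth output / BAM: add input image back

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
bam = generate_bam(img)
```

Because the Tanh output is bounded in [-1, 1], the sketch's BAM can differ from the input image by at most 1 per pixel, which mirrors the claimed role of the map as a residual overlay on the medical image.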
[0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. 
[0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; also see PANES SAAVEDRA: e.g., --One example of a diagnosis that uses first-order and higher-order metrics is to identify regions associated with bone inflammation, such as bone edema. Bone edema may be indicated by a build-up of fluid within the bone. In one example, to diagnose bone edema, an ROI is selected. The ROI may be any feasible bone, or portion of bone, such as the femur.--, in [0099]). Re Claim 21, CHOI as modified by PANES SAAVEDRA further disclose generating, using the main generator, a forged image based on the first medical image (see CHOI: e.g., -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. 
Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. 
For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. [0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]); generating, using the assistant generator, a cycled image based on the forged image (see Choi: e.g., Fig. 14 as reproduced below: [Choi, Fig. 14] As demonstrated in CHOI’s Fig.
14, Choi discloses a “First Model S1031” neural network that reads on the claimed “main generator” neural network, while the “Second Model S1033” neural network reads on the claimed “assistant generator” neural network; the training, functions, and algorithms of Choi’s “First Model” (aligned with the claimed “main generator”) and “Second Model” (aligned with the claimed “assistant generator”) are further disclosed and find support in Choi’s disclosures: ---- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model. When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images. [0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output.
When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. [0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; {so that Choi clearly discloses that the above “First Model”, aligned with the claimed “main generator”, is trained with fundus images and, with the patient's fundus image as a diagnosis target image, outputs a probability value with respect to the presence of diabetic retinopathy, using a classifier that assumes 1 as normal}; also see Choi’s disclosures of the Second Model, aligned with the claimed “assistant generator”, in: [0238]… Referring to FIG.
14, the training process of a neural network model according to an embodiment of the present invention may include obtaining a data set (S1011), training a first model (that is, first neural network model) and a second model (that is, second neural network model) using the obtained data (S1031, S1033), validating the trained first neural network model and second neural network model (S1051), and determining a final neural network model and obtaining parameters or variables thereof (S1072). … a plurality of sub-neural network models may obtain the same training data set and individually generate output values. In this case, an ensemble of the plurality of sub-neural network models may be determined as a final neural network model, and parameter values related to each of the plurality of sub-neural network models may be obtained as training results. An output value of the final neural network model may be set to an average value of the output values by the sub-neural network models. Alternatively, in consideration of accuracy obtained as a result of validating each of the sub-neural network models, the output value of the final neural network model may be set to a weighted average value of the output values of the sub-neural network models. [0241] As a more specific example, when a neural network model includes a first sub-neural network model and a second sub-neural network model, optimized parameter values of the first sub-neural network model and optimized parameter values of the second sub-neural network model may be obtained by machine learning. 
In this case, an average value of output values (for example, probability values related to specific diagnosis assistance information) obtained from the first sub-neural network model and second sub-neural network model may be determined as an output value of the final neural network model.--, in [0238]-[0241]; also see: -- [0252] The training device may obtain a second training data set which includes the plurality of fundus images and at least partially differs from the first training data set and may train a second neural network model using the second training data set. [0253] According to an embodiment of the present invention, the control method of the training device may further include pre-processing a second fundus image so that the second fundus image included in the second data training set is suitable for training the second neural network model, serializing the pre-processed second fundus image, and training the second neural network model that classifies the target fundus image as a third label or a fourth label by using the serialized second fundus image.--, in [0252]-[0254] and [0257]-[0259]; In addition, Choi discloses a plurality of examples of neural networks applicable to the above first neural network model as the claimed “main generator” and second model as the claimed “assistant generator” in: -- [0219] A neural network model may include a convolutional neural network (CNN). As a CNN structure, at least one of AlexNet, LENET, NIN, VGGNet, ResNet, WideResnet, GoogleNet, FractaNet, DenseNet, FitNet, RitResNet, HighwayNet, MobileNet, and DeeplySupervisedNet may be used. The neural network model may be implemented using a plurality of CNN structures. [0220] For example, a neural network model may be implemented to include a plurality of VGGNet blocks.
As a more specific example, a neural network model may be provided by coupling between a first structure in which a 3×3 CNN layer having 64 filters, a batch normalization (BN) layer, and a ReLu layer are sequentially coupled and a second block in which a 3×3 CNN layer having 128 filters, a ReLu layer, and a BN layer are sequentially coupled. [0221] A neural network model may include a max pooling layer subsequent to each CNN block and include a global average pooling (GAP) layer, a fully connected (FC) layer, and an activation layer (for example, sigmoid, softmax, and the like) at an end.--; [0219]-[0221]; further see: 2.3.3 Fundus Image Reconstruction [0528] A fundus image may be reconstructed for training of a heart disease diagnosis assistance neural network model or for assistance in heart disease diagnosis using the neural network model. The reconstruction of the fundus image may be performed by the above-described diagnosis assistance system, diagnostic device, client device, mobile device, or server device. The control unit or processor of each device may perform the reconstruction of the image. [0529] The reconstruction of the fundus image may include modifying the fundus image to a form in which efficiency of the training of the heart disease diagnosis assistance neural network model or the assistance in the heart disease diagnosis using the neural network model may be improved. For example, the reconstruction of the image may include blurring the fundus image or changing chromaticity or saturation of the fundus image. [0530] For example, when the size of a fundus image is reduced or a color channel thereof is simplified, since the amount of data that needs to be processed by a neural network model is reduced, accuracy of a result or a speed of obtaining a result may be improved…. 
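As an editorial aside, the ensemble determination quoted earlier from Choi [0238]-[0241] — the final neural network model's output set to a plain average, or to a validation-accuracy-weighted average, of the sub-models' probability outputs — can be sketched as follows. This is a minimal sketch; the function name and argument shapes are assumptions, not Choi's disclosure.

```python
def final_model_output(sub_outputs, validation_accuracies=None):
    """Sketch of Choi [0238]-[0241]: the final model's output is the
    average of the sub-neural-network models' output values, or,
    alternatively, a weighted average using the accuracy obtained by
    validating each sub-model."""
    if validation_accuracies is None:
        # Plain average of sub-model outputs ([0238]).
        return sum(sub_outputs) / len(sub_outputs)
    # Weighted average, weights proportional to validation accuracy ([0238], alternative).
    total = sum(validation_accuracies)
    return sum(p * w for p, w in zip(sub_outputs, validation_accuracies)) / total
```

With two sub-models ([0241]), `final_model_output([0.2, 0.4])` reduces to the simple mean of the two probability values.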
[0539] According to an embodiment, reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0540] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation……, in [0528]-[0542]); identifying a first discrepancy between the cycled image and the first medical image (see CHOI: e.g., -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. 
The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. 
As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. [0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]; and, --[1125] For example, the outputting of the diagnosis assistance information (S1107) may include comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information and determining the output diagnosis assistance information in consideration of a result of comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information. 
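The left-eye/right-eye comparison in Choi [1125]-[1126] — when the two results are logically consistent, output either value or an intermediate value (a median) — could be sketched like this. Names and the 0.5 decision threshold are assumptions for illustration only.

```python
from statistics import median

def output_bilateral(left_p, right_p, threshold=0.5):
    """Sketch of Choi [1125]-[1126]: compare left-eye and right-eye
    diagnosis assistance information; when logically consistent
    (same predicted label under the threshold), determine the output
    as intermediate information (a median) between the two."""
    if (left_p >= threshold) == (right_p >= threshold):
        return median([left_p, right_p])
    return None  # inconsistent case: handling per [1127] not reproduced here
```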
[1126] The determining of the output diagnosis assistance information in consideration of the result of the comparison may include, when the left-eye diagnosis assistance information and the right-eye diagnosis assistance information are logically consistent, determining the left-eye diagnosis assistance information, the right-eye diagnosis assistance information, or intermediate information (a median) between the left-eye diagnosis assistance information and the right-eye diagnosis assistance information as the output diagnosis assistance information.--, in [1125]-[1127]; also see PANES SAAVEDRA: e.g., --[0105] In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). The stacked set of features maps may function as the input for the next convolutional layer. This block of operations may be referred to as the perceptual block. Note that the matrix components that define the filtering matrices may be learned during the training process and may be called weights. In general there can be several convolutional layers before proceeding to a subsequent step of the deep-learning model, which may be called the logical block. After the input image is sequentially filtered by the elements of the perceptual block, the final activation map may be reshaped as a 1D vector. At this step, the tabulated complementary data can be added to the 1D vector by a concatenation operation. The resulting 1D vector is the input for a subsequent series of operations of the deep-learning model. 
Typically, this vector is passed through a series of dense layers that incorporate non-linear operations between the 1D vector components. This process ends with a final layer that contains N neurons, one for each class, that record values that can be interpreted as a discrete probability distribution. The final decision of the network is usually defined as the neuron/class with the highest probability.--, in [0105]); generating, using the assistant generator, a preserved image based on the first medical image (see Choi: e.g., Fig. 14 as reproduced below: [Choi, Fig. 14] As demonstrated in CHOI’s Fig. 14, Choi discloses a “First Model S1031” neural network that reads on the claimed “main generator” neural network, while the “Second Model S1033” neural network reads on the claimed “assistant generator” neural network; the training, functions, and algorithms of Choi’s “First Model” (aligned with the claimed “main generator”) and “Second Model” (aligned with the claimed “assistant generator”) are further disclosed and find support in Choi’s disclosures: ---- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model. When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images.
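The queue mechanism in Choi [0266], quoted above — serialized images stored in a queue, a predetermined number drawn each training step, and supplementation requested when the unused capacity falls to a reference amount or lower — might be sketched as below. The class and attribute names are hypothetical.

```python
from collections import deque

class SerializedImageQueue:
    """Sketch of Choi [0266]: serialized fundus images are sequentially
    stored in a queue; a predetermined number is used each time in
    training; when unused capacity is reduced to a reference amount or
    lower, the queue requests supplementation."""

    def __init__(self, batch_size, reference_amount):
        self.batch_size = batch_size
        self.reference_amount = reference_amount
        self.queue = deque()
        self.needs_refill = True  # empty queue starts out needing data

    def add(self, serialized_images):
        self.queue.extend(serialized_images)
        self.needs_refill = len(self.queue) <= self.reference_amount

    def next_batch(self):
        # Draw up to batch_size serialized images for one training step.
        batch = [self.queue.popleft()
                 for _ in range(min(self.batch_size, len(self.queue)))]
        if len(self.queue) <= self.reference_amount:
            self.needs_refill = True  # request supplementation
        return batch
```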
[0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. 
[0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; {so that Choi clearly discloses that the above “First Model”, aligned with the claimed “main generator”, is trained with fundus images and, with the patient's fundus image as a diagnosis target image, outputs a probability value with respect to the presence of diabetic retinopathy, using a classifier that assumes 1 as normal}; also see Choi’s disclosures of the Second Model, aligned with the claimed “assistant generator”, in: [0238]… Referring to FIG. 14, the training process of a neural network model according to an embodiment of the present invention may include obtaining a data set (S1011), training a first model (that is, first neural network model) and a second model (that is, second neural network model) using the obtained data (S1031, S1033), validating the trained first neural network model and second neural network model (S1051), and determining a final neural network model and obtaining parameters or variables thereof (S1072). … a plurality of sub-neural network models may obtain the same training data set and individually generate output values. In this case, an ensemble of the plurality of sub-neural network models may be determined as a final neural network model, and parameter values related to each of the plurality of sub-neural network models may be obtained as training results. An output value of the final neural network model may be set to an average value of the output values by the sub-neural network models.
Alternatively, in consideration of accuracy obtained as a result of validating each of the sub-neural network models, the output value of the final neural network model may be set to a weighted average value of the output values of the sub-neural network models. [0241] As a more specific example, when a neural network model includes a first sub-neural network model and a second sub-neural network model, optimized parameter values of the first sub-neural network model and optimized parameter values of the second sub-neural network model may be obtained by machine learning. In this case, an average value of output values (for example, probability values related to specific diagnosis assistance information) obtained from the first sub-neural network model and second sub-neural network model may be determined as an output value of the final neural network model.--, in [0238]-[0241]; also see: -- [0252] The training device may obtain a second training data set which includes the plurality of fundus images and at least partially differs from the first training data set and may train a second neural network model using the second training data set. [0253] According to an embodiment of the present invention, the control method of the training device may further include pre-processing a second fundus image so that the second fundus image included in the second data training set is suitable for training the second neural network model, serializing the pre-processed second fundus image, and training the second neural network model that classifies the target fundus image as a third label or a fourth label by using the serialized second fundus image.--, in [0252]-[0254] and [0257]-[0259]; In addition, Choi discloses a plurality of examples of neural networks applicable to the above first neural network model as the claimed “main generator” and second model as the claimed “assistant generator” in: -- [0219] A neural network model may include a convolutional neural network (CNN).
As a CNN structure, at least one of AlexNet, LENET, NIN, VGGNet, ResNet, WideResnet, GoogleNet, FractaNet, DenseNet, FitNet, RitResNet, HighwayNet, MobileNet, and DeeplySupervisedNet may be used. The neural network model may be implemented using a plurality of CNN structures. [0220] For example, a neural network model may be implemented to include a plurality of VGGNet blocks. As a more specific example, a neural network model may be provided by coupling between a first structure in which a 3×3 CNN layer having 64 filters, a batch normalization (BN) layer, and a ReLu layer are sequentially coupled and a second block in which a 3×3 CNN layer having 128 filters, a ReLu layer, and a BN layer are sequentially coupled. [0221] A neural network model may include a max pooling layer subsequent to each CNN block and include a global average pooling (GAP) layer, a fully connected (FC) layer, and an activation layer (for example, sigmoid, softmax, and the like) at an end.--; [0219]-[0221]; further see: 2.3.3 Fundus Image Reconstruction [0528] A fundus image may be reconstructed for training of a heart disease diagnosis assistance neural network model or for assistance in heart disease diagnosis using the neural network model. The reconstruction of the fundus image may be performed by the above-described diagnosis assistance system, diagnostic device, client device, mobile device, or server device. The control unit or processor of each device may perform the reconstruction of the image. [0529] The reconstruction of the fundus image may include modifying the fundus image to a form in which efficiency of the training of the heart disease diagnosis assistance neural network model or the assistance in the heart disease diagnosis using the neural network model may be improved. For example, the reconstruction of the image may include blurring the fundus image or changing chromaticity or saturation of the fundus image. 
[0530] For example, when the size of a fundus image is reduced or a color channel thereof is simplified, since the amount of data that needs to be processed by a neural network model is reduced, accuracy of a result or a speed of obtaining a result may be improved…. [0539] According to an embodiment, reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0540] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation……, in [0528]-[0542]); identifying a second discrepancy between the preserved image and the first medical image (see CHOI: e.g., -- suitability information on an image may be obtained. The suitability information may indicate whether a diagnosis target image is suitable for obtaining diagnosis assistance information using a diagnosis assistance neural network model…. a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. --, in [0303]-[0310]; and, -- [0670] The heart disease diagnosis assistance module 503 may further obtain additional information (in other words, secondary diagnosis assistance information) other than primary heart disease diagnosis assistance information directly output from a neural network model. For example, the heart disease diagnosis assistance module 503 may obtain instruction information, prescription information or the like which will be described below. 
Also, for example, the heart disease diagnosis assistance module may obtain diagnosis assistance information related to a disease other than a target disease or a class activation map (CAM) image corresponding to the output diagnosis assistance information. The class activation map in this description can be construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. [0671] The diagnosis assistance information output module 505 may obtain diagnosis assistance information from the heart disease diagnosis assistance module. The diagnosis assistance information output module 505 may output diagnosis assistance information related to a heart disease.--, in [0670]-[0671], and, -- [0765] Meanwhile, according to an embodiment of the present invention, diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. 
For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image. [0766] In other words, a method of providing diagnosis assistance information to a user may include obtaining a third image which is a CAM image obtained via a heart disease diagnosis assistance neural network model based on a first image corresponding to an image which is obtained by imaging (that is, original image) and a second image (for example, a blood vessel highlighting image or a blood vessel extraction image) obtained by reconstructing the first image so that target elements (for example, blood vessels) included in the first image are highlighted and displaying the first image and the third image to superimpose each other.--, in [0765]-[0766]; and, --[1125] For example, the outputting of the diagnosis assistance information (S1107) may include comparing the left-eye diagnosis assistance information and the right-eye diagnosis assistance information and determining the output diagnosis assistance information in consideration of a result of comparing the left-eye diagnosis assistance information and the 
right-eye diagnosis assistance information. [1126] The determining of the output diagnosis assistance information in consideration of the result of the comparison may include, when the left-eye diagnosis assistance information and the right-eye diagnosis assistance information are logically consistent, determining the left-eye diagnosis assistance information, the right-eye diagnosis assistance information, or intermediate information (a median) between the left-eye diagnosis assistance information and the right-eye diagnosis assistance information as the output diagnosis assistance information.--, in [1125]-[1127]; also see PANES SAAVEDRA: e.g., --[0105] In a typical deep-learning model, prepared or designed to solve image classification tasks, the input image is first passed through a series of sequential filtering steps, which may be referred to as convolutional layers. Note that at each convolutional layer, several filters can be applied in parallel. After the application of each filter a new matrix may be obtained. These matrices may be called feature activation maps (also referred to as activation maps or feature maps). The stacked set of features maps may function as the input for the next convolutional layer. This block of operations may be referred to as the perceptual block. Note that the matrix components that define the filtering matrices may be learned during the training process and may be called weights. In general there can be several convolutional layers before proceeding to a subsequent step of the deep-learning model, which may be called the logical block. After the input image is sequentially filtered by the elements of the perceptual block, the final activation map may be reshaped as a 1D vector. At this step, the tabulated complementary data can be added to the 1D vector by a concatenation operation. The resulting 1D vector is the input for a subsequent series of operations of the deep-learning model. 
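The left-eye/right-eye rule quoted in [1125]-[1126] above is straightforward to sketch: when the two per-eye results are logically consistent, output either one or an intermediate (median) value. A toy sketch, where the consistency tolerance of 0.1 is purely an assumption for illustration:

```python
# Illustrative: Choi [1125]-[1126] - compare left-eye and right-eye results
# and, when logically consistent, output intermediate (median) information.
def combine_eyes(left, right, tolerance=0.1):
    """Return the midpoint of consistent per-eye scores, else None."""
    if abs(left - right) <= tolerance:   # "logically consistent" (assumed rule)
        return (left + right) / 2        # intermediate information (median)
    return None                          # inconsistent: needs other handling

combined = combine_eyes(0.62, 0.58)      # consistent pair -> midpoint 0.6
```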
Typically, this vector is passed through a series of dense layers that incorporate non-linear operations between the 1D vector components. This process ends with a final layer that contains N neurons, one for each class, that record values that can be interpreted as a discrete probability distribution. The final decision of the network is usually defined as the neuron/class with the highest probability.--, in [0105]); and adjusting at least one parameter of the main generator and the assistant generator based on the first discrepancy and the second discrepancy (see Choi: e.g., Fig. 14 as reproduced below: [Figure: Choi, Fig. 14 (media_image1.png), reproduced in the original action] As demonstrated in Choi’s Fig. 14 above, Choi discloses the “First Model S1031” Neural Network, which reads on the claimed “the main generator” Neural Network, while the “Second Model S1033” reads on the claimed “the assistant generator” Neural Network; and the training, functions and algorithms of Choi’s “First Model” aligned with the claimed “the main generator”, and “Second Model” aligned with the claimed “the assistant generator”, are further disclosed and find support in Choi’s disclosures: ---- [0265] The pre-processing of the first fundus image may further include, by a processing unit, applying a blood vessel emphasizing filter to the fundus image so that a blood vessel included in the first fundus image is emphasized. [0266] Serialized first fundus images may be sequentially stored in a queue, and a predetermined number of the serialized first fundus images stored in the queue may be used each time in training the first neural network model. When the capacity of the serialized first fundus images which have not been used in the training of the first neural network model is reduced to a reference amount or lower, the queue may request for supplementation of the serialized first fundus images.
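The PANES SAAVEDRA pipeline quoted in [0105] above (convolutional filtering, flattening the final activation map to a 1-D vector, concatenating the tabulated complementary data, then dense layers ending in a per-class probability) can be sketched end to end in a few lines. All weights, shapes, and the tabulated values below are made-up illustrations:

```python
# Illustrative sketch of the "logical block" PANES SAAVEDRA describes in
# [0105]: flatten -> concatenate tabular data -> dense layer -> softmax.
import math

def flatten(activation_map):
    """Reshape a 2-D final activation map as a 1-D vector."""
    return [v for row in activation_map for v in row]

def dense(vector, weights, bias):
    """One fully connected layer; weights is [n_out][n_in]."""
    return [
        sum(w * x for w, x in zip(row, vector)) + b
        for row, b in zip(weights, bias)
    ]

def softmax(logits):
    """Interpret final-layer values as a discrete probability distribution."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# 2x2 "final activation map" concatenated with two tabulated values.
features = flatten([[0.1, 0.4], [0.3, 0.2]]) + [0.5, 0.7]
weights = [[1, 0, 0, 0, 1, 0],   # class 0 (made-up weights)
           [0, 1, 0, 0, 0, 2]]   # class 1
probs = softmax(dense(features, weights, bias=[0.0, 0.0]))
decision = probs.index(max(probs))   # neuron/class with highest probability
```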
[0267] The first finding may be any one of a finding of retinal hemorrhage, a finding of generation of retinal exudates, a finding of opacity of crystalline lens, and a finding of diabetic retinopathy.--, in [0265]-[0267]; and, -- [0300] For example, the diagnosis assistance neural network model may predict diagnostic information (that is, information on the presence of a disease) or findings information (that is, information on the presence of abnormal findings) related to an eye disease or a systemic disease of the patient. In this case, the diagnostic information or findings information may be output in the form of a probability. For example, the probability that the patient has a specific disease or the probability that there may be a specific abnormal finding in the patient's fundus image may be output. When a diagnosis assistance neural network model provided in the form of a classifier is used, a predicted label may be determined in consideration of whether an output probability value (or predicted score) exceeds a threshold value. [0301] As a specific example, a diagnosis assistance neural network model may output a probability value with respect to the presence of diabetic retinopathy in a patient with the patient's fundus image as a diagnosis target image. When a diagnosis assistance neural network model in the form of a classifier that assumes 1 as normal is used, a patient's fundus image may be input to the diagnosis assistance neural network model, and in relation to whether the patient has diabetic retinopathy, a normal: abnormal probability value may be obtained in the form of 0.74:0.26 or the like. 
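Choi's [0300]-[0301], just quoted, describe turning a probability pair such as 0.74:0.26 into a predicted label by checking the output against a threshold value. A minimal sketch, where the 0.5 threshold is an assumption for illustration only:

```python
# Illustrative: Choi [0300]-[0301] - a classifier outputs a normal:abnormal
# probability pair; the predicted label depends on a threshold value (assumed
# here to be 0.5, which Choi does not specify).
def predict_label(normal_prob, abnormal_prob, threshold=0.5):
    """Return 'abnormal' only when the abnormal score clears the threshold."""
    assert abs(normal_prob + abnormal_prob - 1.0) < 1e-9
    return "abnormal" if abnormal_prob > threshold else "normal"

# Choi's example output of 0.74:0.26 (normal:abnormal) maps to "normal".
label = predict_label(0.74, 0.26)
```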
[0302] Although the case in which data is classified using the diagnosis assistance neural network model in the form of a classifier has been described herein, the present invention is not limited thereto, and a specific diagnosis assistance numerical value (for example, blood pressure or the like) may also be predicted using a diagnosis assistance neural network model implemented in the form of a regression model.--, in [0300]-[0302]; {so that Choi clearly discloses the above “First Model”, aligned with the claimed “the main generator”, which is trained with fundus images and, with the patient's fundus image as a diagnosis target image, outputs a probability value with respect to the presence of diabetic retinopathy, using a classifier that assumes 1 as normal}; also see Choi’s disclosures of the “Second Model”, aligned with the claimed “the assistant generator”, in: [0238]… Referring to FIG. 14, the training process of a neural network model according to an embodiment of the present invention may include obtaining a data set (S1011), training a first model (that is, first neural network model) and a second model (that is, second neural network model) using the obtained data (S1031, S1033), validating the trained first neural network model and second neural network model (S1051), and determining a final neural network model and obtaining parameters or variables thereof (S1072). … a plurality of sub-neural network models may obtain the same training data set and individually generate output values. In this case, an ensemble of the plurality of sub-neural network models may be determined as a final neural network model, and parameter values related to each of the plurality of sub-neural network models may be obtained as training results. An output value of the final neural network model may be set to an average value of the output values by the sub-neural network models.
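The ensemble described around [0238]-[0241] sets the final model's output to an average of the sub-model outputs, or alternatively to an average weighted by each sub-model's validation accuracy. A small sketch of both variants; all numbers are illustrative:

```python
# Illustrative: the ensemble output rule Choi describes around [0238]-[0241].
def ensemble_output(sub_outputs, validation_accuracies=None):
    """Average sub-model probabilities; weight by validation accuracy if given."""
    if validation_accuracies is None:
        return sum(sub_outputs) / len(sub_outputs)
    total = sum(validation_accuracies)
    return sum(p * w for p, w in zip(sub_outputs, validation_accuracies)) / total

plain = ensemble_output([0.8, 0.6])                  # simple average
weighted = ensemble_output([0.8, 0.6], [0.9, 0.1])   # leans on the stronger model
```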
Alternatively, in consideration of accuracy obtained as a result of validating each of the sub-neural network models, the output value of the final neural network model may be set to a weighted average value of the output values of the sub-neural network models. [0241] As a more specific example, when a neural network model includes a first sub-neural network model and a second sub-neural network model, optimized parameter values of the first sub-neural network model and optimized parameter values of the second sub-neural network model may be obtained by machine learning. In this case, an average value of output values (for example, probability values related to specific diagnosis assistance information) obtained from the first sub-neural network model and second sub-neural network model may be determined as an output value of the final neural network model.--, in [0238]-[0241]; also see: -- [0252] The training device may obtain a second training data set which includes the plurality of fundus images and at least partially differs from the first training data set and may train a second neural network model using the second training data set. [0253] According to an embodiment of the present invention, the control method of the training device may further include pre-processing a second fundus image so that the second fundus image included in the second data training set is suitable for training the second neural network model, serializing the pre-processed second fundus image, and training the second neural network model that classifies the target fundus image as a third label or a fourth label by using the serialized second fundus image.--, in [0252]-[0254] and [0257]-[0259]; In addition, Choi discloses a plurality of examples of neural networks applicable to the above first neural network model (the claimed “main generator”) and second neural network model (the claimed “assistant generator”) in: -- [0219] A neural network model may include a convolutional neural network (CNN).
As a CNN structure, at least one of AlexNet, LENET, NIN, VGGNet, ResNet, WideResnet, GoogleNet, FractaNet, DenseNet, FitNet, RitResNet, HighwayNet, MobileNet, and DeeplySupervisedNet may be used. The neural network model may be implemented using a plurality of CNN structures. [0220] For example, a neural network model may be implemented to include a plurality of VGGNet blocks. As a more specific example, a neural network model may be provided by coupling between a first structure in which a 3×3 CNN layer having 64 filters, a batch normalization (BN) layer, and a ReLu layer are sequentially coupled and a second block in which a 3×3 CNN layer having 128 filters, a ReLu layer, and a BN layer are sequentially coupled. [0221] A neural network model may include a max pooling layer subsequent to each CNN block and include a global average pooling (GAP) layer, a fully connected (FC) layer, and an activation layer (for example, sigmoid, softmax, and the like) at an end.--; [0219]-[0221]; further see: 2.3.3 Fundus Image Reconstruction [0528] A fundus image may be reconstructed for training of a heart disease diagnosis assistance neural network model or for assistance in heart disease diagnosis using the neural network model. The reconstruction of the fundus image may be performed by the above-described diagnosis assistance system, diagnostic device, client device, mobile device, or server device. The control unit or processor of each device may perform the reconstruction of the image. [0529] The reconstruction of the fundus image may include modifying the fundus image to a form in which efficiency of the training of the heart disease diagnosis assistance neural network model or the assistance in the heart disease diagnosis using the neural network model may be improved. For example, the reconstruction of the image may include blurring the fundus image or changing chromaticity or saturation of the fundus image. 
[0530]-[0542] continue as reproduced above --, in [0528]-[0542]); identifying a second discrepancy between the preserved image and the first medical image (see CHOI: e.g., [0303]-[0310], [0670]-[0671], [0765]-[0766], and [1125]-[1127], all as reproduced above; also see PANES SAAVEDRA: e.g., [0105], as reproduced above). Claims 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI as modified by PANES SAAVEDRA, and further in view of FAUST (US 20200272864 A1). Re Claim 13, CHOI as modified by PANES SAAVEDRA does not explicitly disclose that the trained classifier comprises a VGG19 classifier. FAUST discloses a trained classifier comprising a VGG19 classifier (see FAUST: e.g., -- a trained convolutional neural network may classify a tile, and digital pathology platform 110 can use the classification to determine which second convolutional neural network will be used to further classify the tile. A plurality of convolutional neural networks may be used successively on the same data, for example, slide image tile, or on output from the previous convolutional neural network. The identity of each convolutional neural network used in succession can be dynamically determined by digital platform 110, for example, based on the classification, result, or data output from one or more convolutional neural networks (for example, previously used in the hierarchy), machine learning, classification model, clustering algorithm, or data collected by digital pathology platform 110. [0107] For example, a VGG19 CNN trained on 1.2 million images available through ImageNet, for example, pictures of cats and dogs, can be received from external system 150 and used by digital pathology platform 110.
Digital pathology platform can re-train the CNN and change the weighting of the CNN to be better optimized at recognizing and classifying tumors instead of cats and dogs. For each “module” or level in a hierarchy of CNNs, each CNN is retrained to carry out a context specific task. [0108] In some embodiments, digital pathology platform 110 can generate output indications of regions of interest on digital pathology slides using classification data.--, in [0106]-[0108]); CHOI (as modified by PANES SAAVEDRA) and FAUST are combinable as they are in the same field of endeavor: neural networks in medical image processing and analysis and corresponding diagnosis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify CHOI (as modified by PANES SAAVEDRA)’s method using FAUST’s teachings by including a trained classifier comprising a VGG19 classifier in CHOI (as modified by PANES SAAVEDRA)’s neural network and trained classifier in order to generate output indications of regions of interest on digital pathology slides using classification data (see FAUST: e.g. in [0106]-[0108]). Re Claim 16, CHOI as modified by PANES SAAVEDRA and FAUST further disclose wherein the disease comprises a brain tumor or a breast tumor (see FAUST: e.g., -- wherein the pathology features and the predicted region of interest type comprise a brain tumor type.--, in [0018], and, -- [0090] The platform can process pathology images (e.g., in some cases increasing the base of data by breaking larger fields of view into smaller ones) to train deep convolutional neural networks (CNNs) in the features consistent with certain cancers. The platform can use the CNNs to identify regions of interest on pathology slides. The platform and process is not limited to specific pathologies and regions. An example embodiment relates to brain tumors for illustrative purposes.
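The re-training FAUST describes in [0107] above (take a CNN pretrained on ImageNet, then change its weighting to recognize tumors instead of cats and dogs) is, in its simplest form, re-fitting a classification head on fixed pretrained features. A toy sketch with made-up features and a plain logistic head; this is an analogy for transfer learning, not FAUST's actual implementation:

```python
# Illustrative: transfer learning in the spirit of FAUST [0107] - keep a
# pretrained feature extractor fixed and re-fit only the final classifier
# for a new task. Features, labels, and hyperparameters are all toy values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def retrain_head(features, labels, steps=500, lr=0.5):
    """Fit a fresh logistic head on fixed, 'pretrained' features via SGD."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(steps):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy pretrained features for four slide tiles; label 1 = tumor-like.
feats = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = [1, 1, 0, 0]
w, b = retrain_head(feats, labels)
score = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.15])) + b)
```

After re-fitting, a tumor-like feature vector scores above 0.5 while a background-like one scores below it, which is the whole point of swapping the head while keeping the backbone.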
Similar results have been achieved in other cancer types (e.g., lung), as described in an example below.--, in [0090]; and, -- Need for Protein-Based Glioma Biomarkers. [0278] The following is an example application. One challenge facing gene-based biomarker discovery is the assumption that the genomic landscapes of tumors closely mirror their proteomic (and functional) phenotypes. Proteogenomic studies in colorectal, breast and ovarian cancer, that superimpose proteomic and genomic data, now show that protein abundance cannot as yet be accurately inferred from DNA- or RNA measurements (r=0.23-0.45). For example, although copy number alterations (CNA) drive local (“cis”) mRNA changes, relatively few translate to protein abundance changes. In ovarian cancer, >200 protein aberrations can be detected from genes at distant (“trans”) loci of CNA and are discordant with mRNA levels. These discrepancies are ominous, as KEGG pathway analysis reveals that these changes are involved in invasion, cell migration and immune regulation. Consequently, it is perhaps not surprising that these proteomic signatures outperformed transcriptomics for risk stratification between short (<3 yr) and long term survivors (>5 yr). There is a strong need for more proteomic approaches in precision and personalized medicine.--, in [0278]). See the similar obviousness and motivation statements for the combinations of cited references as addressed above for claim 13. Conclusion Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG whose telephone number is (571)270-5670. The examiner can normally be reached on 8:00 - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WEI WEN YANG/Primary Examiner, Art Unit 2662

Prosecution Timeline

Mar 31, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §103
Nov 26, 2025
Response Filed
Feb 15, 2026
Final Rejection — §103
Mar 24, 2026
Interview Requested
Apr 01, 2026
Examiner Interview Summary
Apr 01, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789
ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12586413
METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS
2y 5m to grant Granted Mar 24, 2026
Patent 12582359
IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12573034
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12567168
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
