Prosecution Insights
Last updated: April 19, 2026
Application No. 18/550,315

IMAGE DIAGNOSTIC SYSTEM AND IMAGE DIAGNOSTIC METHOD

Status: Final Rejection (§103)
Filed: Sep 13, 2023
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 (Communications)
Assignee: Sony Group Corporation
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 539 granted / 657 resolved; +20.0% vs TC avg)
Interview Lift: +10.9% (moderate) for resolved cases with an interview
Avg Prosecution: 2y 8m; 34 applications currently pending
Career History: 691 total applications across all art units
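The headline figures above follow directly from the raw counts. A minimal sketch reproducing them (the Tech Center average is not given directly, so it is derived here from the stated +20.0% delta):

```python
# Reproduce the dashboard's examiner statistics from the raw counts shown
# above: 539 granted out of 657 resolved, +20.0% vs the Tech Center average.
granted = 539
resolved = 657

allow_rate = granted / resolved              # career allowance rate
delta_vs_tc = 0.200                          # stated delta vs TC average
implied_tc_avg = allow_rate - delta_vs_tc    # TC average implied by the delta

print(f"Career allow rate: {allow_rate:.0%}")       # -> 82%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # -> 62.0%
```

The same arithmetic explains the "With Interview" figure as a separate conditional estimate rather than a lift applied to 82%: 82% + 10.9% ≈ 93%.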

Statute-Specific Performance

§101:  8.1% (-31.9% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§112:  7.5% (-32.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 657 resolved cases.
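The Office Action below turns on CHOI's class activation map (CAM) disclosure: a heat map relating locations in the input image to the model's prediction, superimposed on that image. As background only, the technique can be sketched with NumPy. This is an illustrative toy with synthetic feature maps and weights, not any party's implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, out_shape):
    """Weighted sum of final-layer feature maps -> coarse CAM,
    upsampled (nearest-neighbor) to the input image size and
    normalized to [0, 1]."""
    # (n_maps, h, w) x (n_maps,) -> (h, w)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)  # keep positive evidence only
    # nearest-neighbor upsample to image resolution
    rows = np.repeat(np.arange(cam.shape[0]), out_shape[0] // cam.shape[0])
    cols = np.repeat(np.arange(cam.shape[1]), out_shape[1] // cam.shape[1])
    cam = cam[np.ix_(rows, cols)]
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

def superimpose(image, cam, alpha=0.4):
    """Blend the CAM onto a grayscale image (both scaled to [0, 1])."""
    return (1 - alpha) * image + alpha * cam

# synthetic example: 3 feature maps of 4x4, input image of 16x16
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
weights = np.array([0.5, 1.0, -0.2])  # hypothetical class weights
img = rng.random((16, 16))

cam = class_activation_map(fmaps, weights, img.shape)
overlay = superimpose(img, cam)
print(overlay.shape)  # (16, 16)
```

In CHOI's cited paragraphs the overlay is additionally gated on a user selection and on whether the diagnosis is classified as abnormal; that gating is omitted here.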

Office Action

§103
DETAILED ACTION

Response to Arguments

The Applicant's amendments and arguments filed 1/16/2025 have been considered. They are moot in view of the new ground(s) of rejection necessitated by the amendment of the independent claims and, in any event, are not persuasive:

Re amended independent claim 1, which recites the newly added limitation of “generate a visualization of the determination basis information based on the selection of the part of the diagnosis content wherein the visualization of the determination basis information is superimposed on the input image”, Applicant asserts that the cited references, particularly COLLEY as modified by CHOI, do not disclose the above limitation. The Examiner disagrees, because CHOI discloses generate a visualization of the determination basis information based on the selection of the part of the diagnosis content wherein the visualization of the determination basis information is superimposed on the input image (see CHOI: e.g., --diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]).

The above CAM is further described in CHOI's [0341]-[0342]: --a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. [0342] The CAM may be obtained optionally. For example, the CAM may be extracted and/or output when diagnostic information or findings information obtained by a diagnosis assistance model is classified into an abnormal class.--

Thus, CHOI discloses that “diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information” and that “a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image.”, which clearly reads on the claimed limitation of “generate a visualization of the determination basis information based on the selection of the part of the diagnosis content wherein the visualization of the determination basis information is superimposed on the input image”.

CHOI further discloses that the above CAM output is provided according to a selection: --[0349] When a CAM is obtained by a neural network model, an image of the CAM may be provided together. The image of the CAM may be selectively provided. For example, the CAM image may not be provided when diagnostic information obtained through a diagnosis assistance neural network model is normal findings information or normal diagnostic information, and the CAM image may be provided together for more accurate clinical diagnosis when the obtained diagnostic information is abnormal findings information or abnormal diagnostic information. … [0353] …diagnosis assistance information corresponding to a diagnosis target image may include level information. The level information may be selected among a plurality of levels. The level information may be determined on the basis of diagnostic information and/or findings information obtained through a neural network model. The level information may be determined in consideration of suitability information or quality information of a diagnosis target image.--, in [0349]-[0353].

Therefore, claims 1-18 are still not patentably distinguishable over the prior art reference(s). (Claims 19-20 were previously withdrawn as a result of the Restriction Election.) Further discussions are addressed in the prior art rejection section below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over COLLEY (US 20210090694 A1) in view of CHOI (US 20200202527 A1).

Re Claim 1, COLLEY discloses an image diagnostic system (see COLLEY: e.g., --A method and system for storing user application programs and micro-service programs, for each of multiple patients that have cancerous cells and receive treatment, includes obtaining clinical records data in original forms, storing it in a semi-structured first database, generating sequencing data for the patient's cancerous and normal cells using a next generation genomic sequencer, storing the sequencing data in the first database, shaping at least some of the first database data to generate system structured data optimized for searching and including clinical record data--, in abstract, and, --systems and methods for obtaining and employing data related to physical and genomic patient characteristics as well as diagnosis, treatments and treatment efficacy to provide a suite of tools to healthcare providers, researchers and other interested parties enabling those entities to develop new cancer state-treatment-results insights and/or improve overall patient healthcare and treatment plans for specific patients. [0007] The present disclosure is described in the context of a system related to cancer research, diagnosis, treatment and results analysis.
Nevertheless, it should be appreciated that the present disclosure is intended to teach concepts, features and aspects that will be useful in many different health related contexts and therefore the specification should not be considered limited to a cancer related systems unless specifically indicated for some system aspect.--, in [0005]-[0006]) comprising: a central processing unit (CPU) configured to (see COLLEY: e.g., Fig. 1, and --implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein.--, in [0948], and, --[0959] System specialists (e.g. employees of the provider that controls/maintains overall system 100) also use interface computing devices to link to server 150 to perform various processes and functions. In FIG. 1--, in [0959]; and, -- [1837] The deep learning MSI predictor processing system 6200 may be implemented on a computing device such as a computer, tablet or other mobile computing device, or server. The system 6200 may include a number of processors, controllers or other electronic components for processing or facilitating the image capture, generation, or storage and image analysis, as described herein. … for communicating to and/or from a portable personal computer, smart phone, electronic document, tablet, and/or desktop personal computer, or other computing devices. 
The computing device further includes an I/O interface connected to devices, such as digital displays, user input devices, etc.--, in [1837]); estimate a diagnosis result by a set of machine learning models based on an input image (see COLLEY: e.g., -- [0175] Technological advances have enabled the digitization of histopathology H&E and IHC slides into high resolution whole slide images (WSIs), providing opportunities to develop computer vision tools for a wide range of clinical applications (27-29). High-resolution, digital images of microscope slides make it possible to use artificial intelligence to analyze the slides and classify the tissue components by tissue class. Recently, deep learning applications to pathology images have shown tremendous promise in predicting treatment outcomes (30), disease subtypes (31,32), lymph node status (27,28), and genetic characteristics (30,33,34) in various malignancies. Deep learning is a subset of machine learning wherein models are built with a number of discrete neural node layers, imitating the structure of the human brain (35). [0176] These models learn to recognize complex visual features from WSIs by iteratively updating the weighting of each neural node based on the training examples (29). [0177] A Convolutional Neural Network (“CNN”) is a deep learning algorithm that analyzes digital images by assigning one class label to each input image. Slides, however, include more than one type of tissue, including the borders between neighboring tissue classes.
There is a need to classify different regions as different tissue classes, in part to study the borders between neighboring tissue classes and the presence of immune cells among tumor cells.--, in [0175]-[0178]); output a diagnosis result report based on the diagnosis result (see COLLEY: e.g., --an overall process that includes one or more sub-processes that process clinical and other patient data and samples (e.g., tumor tissue) to generate intermediate data deliverables and eventually final work product in the form of one or more final reports provided to system clients.--, in [0181]; also see: --[0121] Rich and meaningful data can be found in source clinical documents and records, such as diagnosis, progress notes, pathology reports, radiology reports, lab test results, follow-up notes, images, and flow sheets. These types of records are referred to as “raw clinical data”.--, in [0121]); select a part of content within the diagnostic result report (see COLLEY: e.g., --to identify factors needed to select optimized cancer treatments and initiation of some of those processes is dependent on the results of prior processes. For instance, a tumor sample has to be collected from a patient prior to developing a genetic panel for the tumor, the panel has to be completed prior to analyzing panel results to identify relevant factors and the factors have to be analyzed prior to selecting treatments and/or clinical studies to select for a specific patient.--, in [0065]; and, -- [0151] Selection and evaluation of a treatment or medication typically includes comparing patients' populations. The standard way of performing clinical trials is randomized clinical trials. Observational, nonrandomized data analysis is another frequently used approach. The observational data analysis differs from randomized trials in that there is no reason to believe that populations being studied are free of correlation with an observed outcome.
For example, comparison of breast cancer patients who had surgery to those breast cancer patients who did not have a surgery can be akin to comparing apples and oranges, because the patients that had surgery had a reason for their surgery (meaning that they were not selected at random) and they are thus fundamentally different from those patients who did not have surgery.--, in [0151], -- The instrument reports the sequences as a string of letters, called a read. These reads allow the identification of genes, variants, or sequences of nucleotides in the human genome. An analyst compares these reads from genes to one or more reference genomes of the same genes, variants, or sequences of nucleotides. Identification of certain genetic mutations or particular variants plays an important role in selecting the most beneficial line of therapy for a patient--, in [0161], and, -- [0186] Medical treatment prescriptions or plans are typically based on an understanding of how treatments affect illness (e.g., treatment results) including how well specific treatments eradicate illness, duration of specific treatments, duration of healing processes associated with specific treatments and typical treatment specific side effects. Ideally treatments result in complete elimination of an illness in a short period with minimal or no adverse side effects. In some cases cost is also a consideration when selecting specific medical treatments for specific ailments.--, in [0186]; and, -- a workstation computer/processor runs electronic medical records (EMR) or medical research application programs (hereinafter “research applications”) that present different data representations along with on screen cursor selectable control icons for selecting different data access and manipulation options.)--, in [0266]; and, -- the propensity predictions are generated using a propensity scoring model, also referred to herein as a propensity model. 
The propensity model is a machine-learning model that is trained on the base population of subjects, based at least in part on a plurality of features, which can be temporal or static. Various demographic, genomic, and clinical features can be selected for building a model, which can be done automatically and/or manually. In some embodiments, the propensity model is applied to the base population of subjects to identify a patient profile for patients who are likely to incur the event (e.g., to receive a treatment).--, in [0354]; and, -- the first set of data comprises a CT result. In at least some cases the first set of data comprises a therapy prescription. In at least some cases the first set of data comprises a therapy administration. In at least some cases the first set of data comprises a cancer subtype diagnosis. In at least some cases the first set of data comprises a cancer subtype diagnosis by RNA class. In at least some cases the first set of data comprises a result of a therapy applied to an organoid grown from the patient's cells. In at least some cases the first set of data comprises a tumor quality measure. In at least some cases the first set of data comprises a tumor quality measure selected from at least one of the set of PD-L1, MMR, tumor infiltrating lymphocyte count, and tumor ploidy. In at least some cases the first set of data comprises a tumor quality measure derived from an image analysis of a pathology slide of the patient's tumor.
In at least some cases the first set of data comprises a signaling pathway associated with a tumor of the patient.--, in [0443]); COLLEY, however, does not explicitly disclose select a part of diagnostic content within the diagnostic result report based on a first user input on the part of the diagnostic content. CHOI discloses select a part of diagnostic content within the diagnostic result report based on a first user input on the part of the diagnostic content (see CHOI: e.g., -- a heart disease diagnosis assistance information obtaining unit configured to, on the basis of the target fundus image, obtain heart disease diagnosis assistance information of the testee according to the target fundus image, via a heart disease diagnosis assistance neural network model which obtains diagnosis assistance information that is used for diagnosis of the target heart disease according to the fundus image; and a heart disease diagnosis assistance information output unit configured to output the obtained heart disease diagnosis assistance information, wherein the heart disease diagnosis assistance information includes at least one of grade information which includes a grade selected from a plurality of grades indicating an extent of risk of the target heart disease, score information which is numerical value information for determining an extent of risk of the target heart disease, and risk information which indicates whether the testee belongs to a risk group for the target heart disease.--, in [0008]; and, --diagnosis assistance information corresponding to a diagnosis target image may include level information. The level information may be selected among a plurality of levels. The level information may be determined on the basis of diagnostic information and/or findings information obtained through a neural network model. The level information may be determined in consideration of suitability information or quality information of a diagnosis target image.
When a neural network model is a classifier model that performs multiclass classification, the level information may be determined in consideration of a class into which a diagnosis target image is classified by the neural network model. When a neural network model is a regression model that outputs a numerical value related to a specific disease, the level information may be determined in consideration of the output numerical value. [0354] For example, diagnosis assistance information obtained corresponding to a diagnosis target image may include any one level information selected from a first level information and a second level information. When abnormal findings information or abnormal diagnostic information is obtained through a neural network model, the first level information may be selected as the level information. When abnormal findings information or abnormal diagnostic information is not obtained through a neural network model, the second level information may be selected as the level information.--, in [353]-[0354], and, --[0485] The user interface according to an embodiment of the present invention may obtain a user comment on a diagnosis target fundus image from the user. The user interface may include a user comment object 409 and may display a user input window in response to a user selection on the user comment object. A comment obtained from the user may also be used in updating a diagnosis assistance neural network model. For example, the user input window displayed in response to the user's selection on the user comment object may obtain a user's evaluation on diagnosis assistance information obtained through a neural network, and the obtained user's evaluation may be used in updating a neural network model.--, in [0485]-[0488]); CHOI and COLLEY are combinable as they are in the same field of endeavor: machine learning algorithms in images and diagnosis results analysis. 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify COLLEY’s system using CHOI’s teachings by including select a part of diagnostic content within the diagnostic result report based on a first user input on the part of the diagnostic content to COLLEY’s selected set of data, such as selection of treatments, plurality of features, and/or quality measures in order to obtain abnormal findings information or abnormal diagnostic information (see CHOI: e.g. in [0008], [0353]-[0354], and [0485]-[0488]); COLLEY as modified by CHOI further disclose generate a visualization of the determination basis information based on the selection of the part of the diagnosis content wherein the visualization of the determination basis information is superimposed on the input image (see CHOI: e.g., -- diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. 
As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]); Apparently, above CAM is disclosed as in CHOI’s [0341]-[0342]: -- a class activation map (CAM) may be obtained from a trained neural network model. Diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information. [0342] The CAM may be obtained optionally. For example, the CAM may be extracted and/or output when diagnostic information or findings information obtained by a diagnosis assistance model is classified into an abnormal class.--; So that, CHOI discloses that above “diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information” and “a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. 
The CAM image may be provided to superimpose the fundus image.” , which clearly read on the claimed limitation of “generate a visualization of the determination basis information based on the selection of the part of the diagnosis content wherein the visualization of the determination basis information is superimposed on the input image”; Choi further discloses that above output of “diagnosis assistance information may include a CAM. The CAM may be obtained together with other diagnosis assistance information” and “a visualized image of the CAM may be output” are being selected: as disclosed in CHOI’s disclosures of: [0349] When a CAM is obtained by a neural network model, an image of the CAM may be provided together. The image of the CAM may be selectively provided. For example, the CAM image may not be provided when diagnostic information obtained through a diagnosis assistance neural network model is normal findings information or normal diagnostic information, and the CAM image may be provided together for more accurate clinical diagnosis when the obtained diagnostic information is abnormal findings information or abnormal diagnostic information. …. [0353]…diagnosis assistance information corresponding to a diagnosis target image may include level information. The level information may be selected among a plurality of levels. The level information may be determined on the basis of diagnostic information and/or findings information obtained through a neural network model. 
The level information may be determined in consideration of suitability information or quality information of a diagnosis target image.--, in [0349]-[0353]; also see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective. When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields.
[1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]); and, output the visualization of the determination basis information (see CHOI: e.g., --assisting in determination of the presence of a disease or illness on the basis of a fundus image or the presence of an abnormality which is a basis of the determination will be described.--, in [0100], and, --[0619] When the obtained information is of a different type from the label included in the input data, the updating of the heart disease diagnosis assistance neural network model may include comparing a label assigned to the input fundus image with diagnosis assistance information obtained on the basis of the corresponding fundus image and updating the neural network model on the basis of an error between the label and the diagnosis assistance information…..[0620] For example, when a label assigned to an input fundus image is Grade A and diagnosis information obtained by a neural network model is normality information which indicates that a patient is healthy, the neural network model may be determined as having made a correct judgment, and such determination may be reflected in updating. Also, for example, when a score label assigned to an input fundus image is 0 and diagnosis information obtained by a neural network model is Grade B which indicates that a score ranges from 1 to 10, the diagnosis information obtained by the neural network model and the label may be determined as being different, and such determination may be reflected in updating.--, in [0619]-[0620]; and, --[0923] FIG. 54 is a view for describing a heart disease diagnosis assistance device according to an embodiment of the present invention. Referring to FIG. 
54, a heart disease diagnosis assistance device6701 may include a diagnosis assistance information matching determination unit 6711 and a heart disease diagnosis assistance information output unit 6731. [0924] The diagnosis assistance information matching determination unit 6711 may determine whether a first type of diagnosis assistance information and a second type of diagnosis assistance information match each other. [0925] The heart disease diagnosis assistance information output unit 6731 may output heart disease diagnosis assistance information determined according to whether the first type of diagnosis assistance information and the second type of diagnosis assistance information match each other. [0926] The heart disease diagnosis assistance information output unit 6731 may output heart disease diagnosis assistance information including at least one of the first type of diagnosis assistance information and the second type of diagnosis assistance information when the first type of diagnosis assistance information and the second type of diagnosis assistance information match each other. [0927] The heart disease diagnosis assistance information output unit 6731 may output heart disease diagnosis assistance information including any one piece of diagnosis assistance information selected from the first type of diagnosis assistance information and the second type of diagnosis assistance information when the first type of diagnosis assistance information and the second type of diagnosis assistance information do not match each other. 
[0928] The heart disease diagnosis assistance information output unit 6731 may further include, when the first type of diagnosis assistance information and the second type of diagnosis assistance information do not match each other, outputting reference diagnosis assistance information, which is any one piece of diagnosis assistance information selected from the first type of diagnosis assistance information and the second type of diagnosis assistance information, and heart disease diagnosis assistance information including the other piece of diagnosis assistance information which is corrected to match the reference diagnosis assistance information.--, in [0923]-[0928]). Re Claim 2, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: correct the determination basis information (see COLLEY: e.g., --the validation may be developed using a selection tool such as a search bar 3432, that may receive a search term from a user and display a suggestion menu 3442 based on the search term in order to reduce user search time. The suggestion menu 3442 may be populated from the one or more templates associated with the validation. As an example, the four categories displayed in suggestion menu 3442 may be matched back to the list of items displayed in panel 3322. After a category 3452 from suggestion menu 3442 has been selected, the rule authoring system can create the corresponding system code for the new validation. For example, the selection of the category 3452 may trigger the appearance of sub-level data entry elements tied to the selected category 3452. As shown in FIG. 94, for example, the selection of .diagnosis.primaryDiagnosis.site.display as category 3452 populates the rule displays shown below the label. The field “.diagnosis.primaryDiagnosis.site.display” refers to the text that is in a set of structured data that reflects the primary diagnosis of a tumor at a site of biopsy. 
For example, the value of that text entry may be “breast cancer,” “prostate cancer,” or the like. However, it should be understood that in the examination of a patient's medical record, particularly the medical record of a medically complex patient such as a metastatic cancer patient, there is a substantial amount of information in the record that may suggest the factual primary diagnosis of a cancer at a site. In addition, because medical records are updated over a period of many years and many clinical visits, it is often the case that information collected in the medical record over time may be inconsistent—that is to say, records from a first clinical visit may indicate that the size of the tumor is 5 cm while records from a second clinical visit may indicate that the size of the tumor is 6 cm. This may be because the size of the tumor has changed between appointments. Or, it may be a data entry error that was not corrected in the medical record. For this reason, a validation may be set up to check one or more structured data fields for internal value consistency as a condition to ensuring that the primary diagnosis text can be relied on with a high level of confidence, given the extensive and sometimes contrary information in a medical record….[1321]…Accordingly, data abstractors can verify/correct pre-populated data--, in [1315]-[1321] {COLLEY’s above disclosure of a validation of “the determination basis information” describes a correction unit that corrects pre-populated data, such as “the medical record of a medically complex patient such as a metastatic cancer patient, there is a substantial amount of information in the record that may suggest the factual primary diagnosis of a cancer at a site”}; also see CHOI: e.g., --[0203] For example, expansion of image data may be performed by reversing the left and right of an image, cutting(cropping) a part of the image, correcting a color value of the image, or adding artificial noise to the image. 
As a specific example, cutting a part of the image may be performed by cutting a partial region of an element constituting an image or randomly cutting partial regions. In addition, image data may be expanded by reversing the left and right of the image data, reversing the top and bottom of the image data, rotating the image data, resizing the image data to a certain ratio, cropping the image data, padding the image data, adjusting color of the image data, or adjusting brightness of the image data.--, in [0203], and, --[0907] The outputting of the determined heart disease diagnosis assistance information (S473) may further include, when the first type of diagnosis assistance information and the second type of diagnosis assistance information do not match each other, outputting reference diagnosis assistance information, which is any one piece of diagnosis assistance information selected from the first type of diagnosis assistance information and the second type of diagnosis assistance information, and heart disease diagnosis assistance information including the other piece of diagnosis assistance information which is corrected to match the reference diagnosis assistance information.--, in [0907]); execute a relearning process of the set of machine learning models based on the corrected determination basis information (see COLLEY: e.g., --the validation may be developed using a selection tool such as a search bar 3432, that may receive a search term from a user and display a suggestion menu 3442 based on the search term in order to reduce user search time. The suggestion menu 3442 may be populated from the one or more templates associated with the validation. As an example, the four categories displayed in suggestion menu 3442 may be matched back to the list of items displayed in panel 3322. After a category 3452 from suggestion menu 3442 has been selected, the rule authoring system can create the corresponding system code for the new validation. 
For example, the selection of the category 3452 may trigger the appearance of sub-level data entry elements tied to the selected category 3452. As shown in FIG. 94, for example, the selection of .diagnosis.primaryDiagnosis.site.display as category 3452 populates the rule displays shown below the label. The field “.diagnosis.primaryDiagnosis.site.display” refers to the text that is in a set of structured data that reflects the primary diagnosis of a tumor at a site of biopsy. For example, the value of that text entry may be “breast cancer,” “prostate cancer,” or the like. However, it should be understood that in the examination of a patient's medical record, particularly the medical record of a medically complex patient such as a metastatic cancer patient, there is a substantial amount of information in the record that may suggest the factual primary diagnosis of a cancer at a site. In addition, because medical records are updated over a period of many years and many clinical visits, it is often the case that information collected in the medical record over time may be inconsistent—that is to say, records from a first clinical visit may indicate that the size of the tumor is 5 cm while records from a second clinical visit may indicate that the size of the tumor is 6 cm. This may be because the size of the tumor has changed between appointments. Or, it may be a data entry error that was not corrected in the medical record. 
For this reason, a validation may be set up to check one or more structured data fields for internal value consistency as a condition to ensuring that the primary diagnosis text can be relied on with a high level of confidence, given the extensive and sometimes contrary information in a medical record….[1321]…Accordingly, data abstractors can verify/correct pre-populated data--, in [1315]-[1321] {COLLEY’s above disclosure of a validation of “the determination basis information” describes a correction unit that corrects pre-populated data, such as “the medical record of a medically complex patient such as a metastatic cancer patient, there is a substantial amount of information in the record that may suggest the factual primary diagnosis of a cancer at a site”}; also see CHOI: e.g., --[0203] For example, expansion of image data may be performed by reversing the left and right of an image, cutting(cropping) a part of the image, correcting a color value of the image, or adding artificial noise to the image. As a specific example, cutting a part of the image may be performed by cutting a partial region of an element constituting an image or randomly cutting partial regions. 
In addition, image data may be expanded by reversing the left and right of the image data, reversing the top and bottom of the image data, rotating the image data, resizing the image data to a certain ratio, cropping the image data, padding the image data, adjusting color of the image data, or adjusting brightness of the image data.--, in [0203], and, --[0907] The outputting of the determined heart disease diagnosis assistance information (S473) may further include, when the first type of diagnosis assistance information and the second type of diagnosis assistance information do not match each other, outputting reference diagnosis assistance information, which is any one piece of diagnosis assistance information selected from the first type of diagnosis assistance information and the second type of diagnosis assistance information, and heart disease diagnosis assistance information including the other piece of diagnosis assistance information which is corrected to match the reference diagnosis assistance information.--, in [0907]; and see: -- [0913] The second type diagnosis assistance information obtaining unit 6500 may, on the basis of the target fundus image, obtain second type of diagnosis assistance information according to the target fundus image via a second neural network model which is trained to obtain a second type of diagnosis assistance information used in diagnosis of a target heart disease on the basis of a fundus image. [0914] The heart disease diagnosis assistance information output unit 6700 may, on the basis of the first type of diagnosis assistance information and the second type of diagnosis assistance information, output heart disease diagnosis assistance information for assisting in diagnosis of a target heart disease of a testee. 
[0915] The second type of diagnosis assistance information may be obtained as information having a different dimension from the first type of diagnosis assistance information, via a second neural network model which is at least partially different from a first neural network model. [0916] The first neural network model may be provided to perform multiclass classification of fundus images into a plurality of pieces of diagnosis assistance information, and the second neural network model may be provided to perform binary classification of fundus images into a first piece of diagnosis assistance information and a second piece of diagnosis assistance information.--, in [0913]-[0916]). Re Claim 3, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: infer observation data related to a feature of the input image and infer finding data related to a diagnosis of the input image (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- 
The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. 
The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); calculate a basis of the inference of the observation data and a basis of the inference of the finding data (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. 
A model that links a binary response to a set of features can be used. See Stuart, E. Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]), and, generate the diagnosis result report of the input image based on the observation data and the finding data, and the basis of the inference of the observation data and the basis of the inference of the finding data (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. 
Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]). 
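The serial two-model arrangement that the rejection cites from CHOI [0785] (a primary neural network model outputs primary diagnosis assistance information, and a secondary model connected in series derives secondary information at least partly from that output) can be sketched as follows. This is an illustrative sketch only; the function names, placeholder weights, and thresholds are hypothetical and appear in neither COLLEY nor CHOI.

```python
# Illustrative sketch (not from the cited references) of CHOI [0785]:
# a primary model obtains primary diagnosis assistance information
# (e.g., a probability that a testee has the target heart disease) and a
# secondary model connected in series obtains secondary information
# (e.g., a probability of belonging to a risk group) at least partly
# based on the primary output.
import math

def primary_model(fundus_features):
    """Hypothetical primary model: logistic score -> disease probability."""
    weights = [0.8, -0.5, 1.2]          # placeholder weights
    z = sum(w * x for w, x in zip(weights, fundus_features))
    return 1.0 / (1.0 + math.exp(-z))

def secondary_model(primary_prob, age):
    """Hypothetical secondary model: risk-group probability computed
    partly from the primary output and partly from a non-image input."""
    z = 3.0 * primary_prob + 0.02 * age - 2.0
    return 1.0 / (1.0 + math.exp(-z))

def diagnosis_assistance(fundus_features, age):
    # Models run in series: the secondary consumes the primary output.
    p = primary_model(fundus_features)
    r = secondary_model(p, age)
    return {"disease_probability": p, "risk_group_probability": r}

info = diagnosis_assistance([0.4, 0.1, 0.7], age=60)
```

The serial coupling is the point CHOI [0785] emphasizes: the second model's input dimension includes the first model's output, so the two pieces of assistance information are not independent predictions.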
Re Claim 4, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: infer the observation data based on a first machine learning model of the set of machine learning models (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. 
Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]), and infer the finding data based on a second machine learning model of the set of machine learning models (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. 
Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]). 
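The two-model configuration the rejection maps onto claim 4 is described in CHOI [0913]-[0916]: a first neural network model performs multiclass classification of a fundus image while a partially different second model performs binary classification, yielding diagnosis assistance information of different dimensions. The following sketch illustrates that distinction; the grade labels, placeholder weights, and function names are hypothetical and are not taken from either reference.

```python
# Illustrative sketch (not from the cited references) of CHOI [0913]-[0916]:
# a first model yields a multiclass distribution over grades, and a
# partially different second model yields a single binary score, i.e.,
# diagnosis assistance information of different dimensions.
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def first_model_multiclass(features):
    """Hypothetical multiclass head: probabilities over three grades."""
    # placeholder per-class weight vectors
    w = {"Normal": [0.2, -0.3], "Grade A": [0.5, 0.1], "Grade B": [-0.1, 0.9]}
    scores = [sum(wi * x for wi, x in zip(w[c], features)) for c in w]
    return dict(zip(w, softmax(scores)))

def second_model_binary(features):
    """Hypothetical binary head: single probability of abnormality."""
    z = 0.9 * features[0] + 1.1 * features[1] - 0.5
    return 1.0 / (1.0 + math.exp(-z))

features = [0.6, 0.8]
multi = first_model_multiclass(features)   # 3-dimensional output
binary = second_model_binary(features)     # 1-dimensional output
```

Keeping the heads "at least partially different," as CHOI puts it, means the binary score is not merely a threshold over the multiclass distribution; the two outputs can therefore disagree, which is what the matching determination unit of [0924]-[0928] checks for.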
Re Claim 5, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: extract the observation data from patient information and an examination value, wherein the examination value corresponds to the input image (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective. 
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); execute a first learning process of the first machine learning model based on the input image as a first explanatory variable and the extracted observation data as a first objective variable (see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); extract the finding data from the diagnosis result report for the input image (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective.
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); execute a second learning process of the second machine learning model based on the input image as a second explanatory variable and extracted finding data as a second objective variable (see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]). Re Claim 6, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: extract the observation data from patient information and an examination value, wherein the examination value corresponds to the input image (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective.
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); execute a first learning process of the first machine learning model based on the input image and extracted observation data as first explanatory variables (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. 
Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); and extract the finding data from the diagnosis result report for the input image (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective.
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]); execute a second learning process of the second machine learning model based on the input image and extracted observation data as a second explanatory variable and the extracted finding data as an objective variable (see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]). Re Claim 7, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: extract observation data from the diagnosis result report based on the input image, the patient information and an examination value (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective.
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]); the patient information and the examination value correspond to the input image (see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. 
[0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]). Re Claim 8, COLLEY as modified by CHOI further discloses wherein the CPU is further configured to: output reliability data of at least one of the inference of the observation data or the inference of the finding data, and generate the diagnosis result report, wherein the diagnosis result report includes information of the reliability data (see COLLEY: e.g., -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference
herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. 
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]).

Re Claim 9, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to control presentation of the diagnosis result report (see COLLEY: e.g., --an overall process that includes one or more sub-processes that process clinical and other patient data and samples (e.g., tumor tissue) to generate intermediate data deliverables and eventually final work product in the form of one or more final reports provided to system clients.--, in [0181]; also see: --[0121] Rich and meaningful data can be found in source clinical documents and records, such as diagnosis, progress notes, pathology reports, radiology reports, lab test results, follow-up notes, images, and flow sheets. These types of records are referred to as “raw clinical data”.--, in [0121]).

Re Claim 10, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to: control display of the determination basis information superimposed on the input image, the determination basis information includes a basis of inference of each of observation data and finding data, the observation data and the finding data are associated with the input image (see CHOI: e.g., -- diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image.
The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]). Re Claim 11, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to control display of the determination basis information in association with observation data and finding data, the determination basis information includes a basis of inference of each of the observation data and the finding data, and the observation data and the finding data are associated with the input image (see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. 
For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]; and, -- diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. 
The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]). Re Claim 12, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to control presentation of reliability data of inference of each of observation data and finding data, and the observation data and the finding data are associated with the input image (see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. 
For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]; and, -- diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. 
The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]). Re Claim 13, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to determine adoption of diagnosis result report based on a second user input (see COLLEY: e.g., --the validation may be developed using a selection tool such as a search bar 3432, that may receive a search term from a user and display a suggestion menu 3442 based on the search term in order to reduce user search time. The suggestion menu 3442 may be populated from the one or more templates associated with the validation. 
As an example, the four categories displayed in suggestion menu 3442 may be matched back to the list of items displayed in panel 3322. After a category 3452 from suggestion menu 3442 has been selected, the rule authoring system can create the corresponding system code for the new validation. For example, the selection of the category 3452 may trigger the appearance of sub-level data entry elements tied to the selected category 3452. As shown in FIG. 94, for example, the selection of .diagnosis.primaryDiagnosis.site.display as category 3452 populates the rule displays shown below the label. The field “.diagnosis.primaryDiagnosis.site.display” refers to the text that is in a set of structured data that reflects the primary diagnosis of a tumor at a site of biopsy. For example, the value of that text entry may be “breast cancer,” “prostate cancer,” or the like. However, it should be understood that in the examination of a patient's medical record, particularly the medical record of a medically complex patient such as a metastatic cancer patient, there is a substantial amount of information in the record that may suggest the factual primary diagnosis of a cancer at a site. In addition, because medical records are updated over a period of many years and many clinical visits, it is often the case that information collected in the medical record over time may be inconsistent—that is to say, records from a first clinical visit may indicate that the size of the tumor is 5 cm while records from a second clinical visit may indicate that the size of the tumor is 6 cm. This may be because the size of the tumor has changed between appointments. Or, it may be a data entry error that was not corrected in the medical record. 
For this reason, a validation may be set up to check one or more structured data fields for internal value consistency as a condition to ensuring that the primary diagnosis text can be relied on with a high level of confidence, given the extensive and sometimes contrary information in a medical record….[1321]…Accordingly, data abstractors can verify/correct pre-populated data--, in [1315]-[1321]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together.
For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]; and, -- diagnosis assistance information may include a CAM related to the output diagnosis assistance information. Together with the primary diagnosis assistance information or as the primary diagnosis assistance information, a CAM may be obtained from a neural network model. When the CAM is obtained, a visualized image of the CAM may be output. The CAM may be provided to a user via the above-described user interface. The CAM may be provided according to a user's selection. A CAM image may be provided together with a fundus image. The CAM image may be provided to superimpose the fundus image. The class activation map in this description is construed as including similar or expanded concepts which refer to indicate relationship between locations in the image and the prediction result. For example, the class activation map may be a Saliency map, a heat map, a feature map or a probability map, which provide information in relationship between pixels in the image and the prediction result. 
As a specific example, when a diagnosis assistance system for assisting in heart disease diagnosis on the basis of a fundus image includes a fundus image obtaining unit configured to obtain a target fundus image, a pre-processing unit configured to process the target fundus image so that blood vessels therein are highlighted, a diagnosis assistance unit configured to obtain heart disease diagnosis assistance information related to a patient on the basis of the pre-processed image, and an output unit configured to output the heart disease diagnosis assistance information, the diagnosis assistance unit may obtain a CAM related to a heart disease diagnosis assistance unit, and the output unit may output the obtained CAM to superimpose the target fundus image.--, in [0832]-[0833]).

Re Claim 14, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to execute, based on the second user input, at least one of: correction of one of the observation data or the finding data in the diagnosis report, deletion of the observation data or the finding data, or addition of one of the observation data or the finding data in the diagnosis result report (see CHOI: e.g., --[0185] According to another embodiment, an image may be cut or pixels may be added to an obtained image to adjust the size or aspect ratio of the image. For example, when a portion unnecessary for training is included in an image, a portion of the image may be cropped to remove the unnecessary portion. Alternatively, when a portion of the image is cut away and a set aspect ratio is not met, a column or row may be added to the image to adjust the aspect ratio of the image. In other words, a margin or padding may be added to the image to adjust the aspect ratio. [0186] According to still another embodiment, the volume and the size or aspect ratio of the image may be adjusted together.
For example, when a volume of an image is large, the image may be down-sampled to reduce the volume of the image, and an unnecessary portion included in the reduced image may be cropped to convert the image to appropriate image data. [0187] According to another embodiment of the present invention, an orientation of image data may be changed. [0188] As a specific example, when a fundus image data set is used as a data set, the volume or size of each fundus image may be adjusted. Cropping may be performed to remove a margin portion excluding a fundus portion of a fundus image, or padding may be performed to supplement a cut-away portion of a fundus image and adjust an aspect ratio thereof.--, in [0185]-[0188]). Re Claim 15, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to extract a composite variable based on the second user input; and name the composite variable based on the second user input (see COLLEY: e.g., -- [0175] Technological advances have enabled the digitization of histopathology H&E and IHC slides into high resolution whole slide images (WSIs), providing opportunities to develop computer vision tools for a wide range of clinical applications (27-29). High-resolution, digital images of microscope slides make it possible to use artificial intelligence to analyze the slides and classify the tissue components by tissue class. Recently, deep learning applications to pathology images have shown tremendous promise in predicting treatment outcomes (30), disease subtypes (31,32), lymph node status (27,28), and genetic characteristics (30,33,34) in various malignancies. Deep learning is a subset of machine learning wherein models are built with a number of discrete neural node layers, imitating the structure of the human brain (35). [0176] These models learn to recognize complex visual features from WSIs by iteratively updating the weighting of each neural node based on the training examples (29). 
[0177] A Convolutional Neural Network (“CNN”) is a deep learning algorithm that analyzes digital images by assigning one class label to each input image. Slides, however, include more than one type of tissue, including the borders between neighboring tissue classes. There is a need to classify different regions as different tissue classes, in part to study the borders between neighboring tissue classes and the presence of immune cells among tumor cells.--, in [0175]-[0178]; and, -- analysis and interpretation of genetic and clinical patient data, including bulk-cell sequencing data, to make inferences about disease susceptibility and pharmacogenomics and thereby make appropriate treatment decisions, which can improve overall patient healthcare.--, in [0164], and, -- a platform for identifying the number of both new and known CNV in a patient's DNA/RNA and referencing CNV occurrence with patient/clinical information through the proper analysis tools to make inferences about disease susceptibility and pharmacogenomics that can be used to make treatment decisions which improve overall patient healthcare.--, in [0234], and, -- Additional fields for either table may include: graph distance (number of hops), preferred dictionary CUID, pre-defined entries (such as names, regions, categories), inferred structure entries (such as diagnosis site, generic drug name), language of text, match type (such as exact, exact but letter case mismatch, fuzzy matched, etc.), text type (TTY, described above), or other fields.--, in [1653], and, -- The central role of the propensity score in observational studies for causal effects, Biometrika (1983), 70(1):41-55, each of which is incorporated by reference herein in its entirety. This score acts as a balancer—conditional on the propensity score, the distribution of X should be identical between the treatment and control groups. A model that links a binary response to a set of features can be used. See Stuart, E. 
Matching methods for causal inference: a review and a look forward, Statistical Science (2010) 25(1):1-21.--, in [2088]-[2089]; also see CHOI: e.g., -- [0784] The heart disease diagnosis assistance module may further obtain information inferred on the basis of the information on the presence or absence of the target disease. For example, the heart disease diagnosis assistance module may further obtain, on the basis of a predetermined correlation or in consideration of an input value other than the fundus image as well as the fundus image, an extent of risk of a disease other than the target heart disease. [0785] According to an embodiment, a heart disease diagnosis assistance device may obtain secondary diagnosis assistance information using a heart disease neural network model including a primary neural network model which obtains a primary diagnosis assistance information(for example, a probability that a testee has a target heart disease) and a secondary neural network model which is connected in series to the primary neural network model and obtains a secondary diagnosis assistance information(for example, a probability that a testee belongs to a risk group for a target heart disease) at least partly based on the primary diagnosis assistance information. The outputting of the diagnosis assistance information (S5025) may further include outputting disease presence/absence information related to the target heart disease. The outputting of the diagnosis assistance information may further include outputting the disease presence/absence information on the target heart disease and other information thereon together. For example, the outputting of the diagnosis assistance information may include outputting the disease presence/absence information and information inferred using the disease presence/absence information together.--, in [0784]-[0785]). 
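The COLLEY excerpt cited for Claim 15 above (citing Rosenbaum & Rubin, Biometrika 1983) notes that the propensity score "acts as a balancer" and that "a model that links a binary response to a set of features can be used." For background, that binary-response model is conventionally a logistic regression of treatment assignment on the covariates; the sketch below is an illustration only, with an invented function name and a synthetic toy cohort, and is not COLLEY's implementation.

```python
import numpy as np

def fit_propensity(X, t, lr=0.1, steps=2000):
    """Estimate propensity scores P(t = 1 | X) with a plain
    logistic-regression model fit by gradient descent."""
    X1 = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))       # predicted P(t=1 | X)
        w -= lr * X1.T @ (p - t) / len(t)       # log-loss gradient step
    return 1.0 / (1.0 + np.exp(-X1 @ w))

# Toy observational cohort: a single covariate drives the
# probability of treatment assignment.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
true_p = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
t = (rng.random(500) < true_p).astype(float)

scores = fit_propensity(X, t)
# Units matched on similar scores can then be compared: conditional
# on the score, the distribution of X should be balanced between
# the treatment and control groups.
```

In matching practice (see the Stuart 2010 review quoted above), treated and control units with nearby scores are paired before outcomes are compared.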
Re Claim 16, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to extract an image feature amount based on a learning process of the input image, the composite variable includes at least one of the extracted image feature or a variable of observation values, and the observation values are associated with the input image (see COLLEY: e.g., -- [0126] Mobile Supplementation, Extraction, and Analysis of Health Records [0127] A system and method implemented in a mobile platform are described herein that facilitate the capture of documentation, along with the extraction and analysis of data embedded within the data.--, in [0126]-[0127], and, -- [0147] Extracting meaningful medical features from an ever expanding quantity of health information tabulated for a similarly expanding cohort of patients having a multitude of sparsely populated features is a difficult endeavor. Identifying which medical features from the tens of thousands of features available in health information are most probative to training and utilizing a prediction engine only compounds the difficulty. Features which may be relevant to predictions may only be available in a small subset of patients and features which are not relevant may be available in many patients. What is needed is a system which may ingest these impossibly comprehensive scope of available data across entire populations of patients to identify features which apply to the largest number of patients and establish a model for prediction of an objective.
When there are multiple objectives to choose from, what is needed is a system which may curate the medical features extracted from patient health information to a specific model associated with the prediction of the desired objective.--, in [0147], and, -- Templates for each document may be one component of the model, along with identifiers for each template, regions or masks, features or fields and tools or instructions for how to extract those features or fields and verify the accuracy of that extraction, associated sub-fields, and rules for normalization of those fields and sub-fields. [1522] One exemplary technique to access the data within each of the identified sections may be to generate a mask which “outlines” the section, apply the mask to the document to extract each section in turn, and then provide the section to an OCR algorithm, such as an OCR post-processing optimized to extracting information from the respective section type.--, in [1521]-[1524]; also see CHOI: e.g., -- extract fundus image data having a left eye label of a specific patient and fundus image data having a right eye label of the specific patient to be used together in training--, in [0232]; and, -- reconstructing a fundus image to highlight blood vessels may include blurring the fundus image, applying the Gaussian filter to the blurred fundus image, and highlighting (or extracting) blood vessels included in the fundus image to which the Gaussian filter is applied. All or some of the above-described processes may be used in order to highlight or extract the blood vessels. [0596] The reconstructing of the fundus image may include extracting blood vessels. For example, the reconstructing of the fundus image may include generating blood vessel segmentation. [0597] The highlighting of blood vessels may include processing a region in which the blood vessels are distributed or processing an extracted blood vessel image. 
For example, the highlighting of blood vessels may include changing color, brightness, and histogram of a region of a fundus image in which blood vessels are distributed or a blood vessel image extracted from the fundus image.--, in [0595]-[0600]). Re Claim 17, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to detect a loss of importance of a variable, and calculate the basis of the inference of the observation data based on the variable (see COLLEY: e.g., --accurate profiling in clinical specimens requires an extremely sensitive assay capable of detecting gene alterations in specimens with a low tumor percentage. Second, millions of bases within the tumor genome are assayed. For this reason, rigorous statistical and analytical approaches for validation are required in order to demonstrate the accuracy of NGS technology for use in clinical settings and in developing cause and effect efficacy insights.--, in [0045], and, --[0106] Conventional approaches to bring pharmacogenomics into precision medicine for the treatment, diagnosis, and analysis of diseases include the use of single nucleotide polymorphism (SNP) genotyping and detection methods (such as through the use of a SNP chip)--, in [0106], and [0112]). Re Claim 18, COLLEY as modified by CHOI further disclose wherein the CPU is further configured to detect a missing value that is a basis for the calculation of the basis of the inference of the observation data; prompt a user to input the missing value, and infer the observation data based on the input missing value (see COLLEY: e.g., --Generally, an analyst views the slide to estimate the percentage of the total cancer cells that are positive and compares it to a threshold value. If the percentage exceeds that threshold, the cancer cell sample on the slide is designated as positive for that biomarker. 
[0247] Similarly FISH and RPPA can be used to visually detect and quantify copies of the PD-L1 protein and/or CD274 RNA in a cancer cell sample. If the results of these assays exceed a selected threshold value, the cancer cell sample can be labeled as PD-L1 positive.--, in [0246]-[0247], also see CHOI: e.g., -- the client device may further include an output unit. The output unit may include a display configured to output a video or an image or may include a speaker configured to output sound. The output unit may output video or image data obtained by the imaging unit. The output unit may output diagnosis assistance information obtained from the diagnostic device. [0156] Although not illustrated, the client device may further include an input unit. The input unit may obtain a user input. For example, the input unit may obtain a user input that requests for diagnosis assistance information. The input unit may obtain information on a user who evaluates diagnosis assistance information obtained from the diagnostic device.--, in [0155]-[0156]).

Conclusion

Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
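The reply-period arithmetic above (a three-month shortened statutory period, extendable with fees under 37 CFR 1.136(a) but never beyond six months from mailing) reduces to simple calendar-month date math. The sketch below is illustrative only, using the March 19, 2026 mailing date shown in the prosecution timeline; it ignores the weekend/holiday roll of 35 U.S.C. 21(b) and is not legal advice.

```python
from datetime import date

def add_months(d, n):
    """Advance a date by n calendar months, clamping the day to the
    last day of the target month when the same day does not exist."""
    m = d.month - 1 + n
    y, m = d.year + m // 12, m % 12 + 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    last = [31, 29 if leap else 28, 31, 30, 31, 30,
            31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, last))

mailed = date(2026, 3, 19)           # mailing date of this final action
ssp_expires = add_months(mailed, 3)  # shortened statutory period
hard_limit = add_months(mailed, 6)   # absolute statutory maximum
```

Replies after `ssp_expires` but on or before `hard_limit` require an extension-of-time fee under 37 CFR 1.136(a); after `hard_limit` the application goes abandoned.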
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG, whose telephone number is (571) 270-5670. The examiner can normally be reached from 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEI WEN YANG/
Primary Examiner, Art Unit 2662
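The claim 1 limitation disputed above, a class activation map (CAM) superimposed on the input image, can be illustrated with a short, generic sketch. The array shapes, the single-channel red heatmap, and the blending weight below are illustrative assumptions for the overlay idea in general, not an implementation from CHOI or from the application.

```python
import numpy as np

def overlay_cam(image, cam, alpha=0.4):
    """Superimpose a class activation map on an input image.

    image: (H, W, 3) float array in [0, 1] -- the input image.
    cam:   (h, w) float array of raw activation scores; normalized
           to [0, 1], upsampled to (H, W) by nearest-neighbor
           repetition, mapped to a red heatmap, and alpha-blended
           over the image.
    """
    H, W, _ = image.shape
    # Normalize activations to [0, 1].
    cam = cam - cam.min()
    if cam.max() > 0:
        cam = cam / cam.max()
    # Nearest-neighbor upsample to the image resolution.
    cam = np.repeat(np.repeat(cam, H // cam.shape[0], axis=0),
                    W // cam.shape[1], axis=1)
    # Simple heatmap: activation drives the red channel only.
    heat = np.zeros((H, W, 3))
    heat[..., 0] = cam
    # Blend: strongly activated regions tint the image red.
    return (1 - alpha) * image + alpha * heat

image = np.full((8, 8, 3), 0.5)           # uniform gray stand-in image
cam = np.array([[0.0, 1.0], [0.0, 0.0]])  # top-right quadrant activated
out = overlay_cam(image, cam)
```

A production system would typically use a perceptual colormap and bilinear upsampling; the sketch keeps only the structural point the examiner relies on, that the basis visualization is rendered on top of the very image that was classified.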

Prosecution Timeline

Sep 13, 2023
Application Filed
Oct 10, 2025
Non-Final Rejection — §103
Jan 16, 2026
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789
ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586413
METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12582359
IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573034
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567168
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
