DETAILED ACTION
Applicant's amendment of November 28, 2025, overcomes the following:
Claim objections
Applicant has amended claims 3, 5, 7, 9-10, 13-16, 19, and 20. Claim 25 has been canceled. Claims 1, 3-24, and 26 are pending.
Response to Arguments
Applicant asserts that “with respect to claim 21, the Office Action incorrectly recites claim 21 and alleges that claim 21 recites, in part, "providing a schematic body model of the patient."…” (Remarks, Pg. 12).
As Applicant indicates, claim 21 recites "building a schematic body model of the patient" rather than "providing a schematic body model of the patient". Applicant’s arguments above regarding claim 21 have therefore been fully considered and are found persuasive. However, upon further review, the previously cited prior art can still be applied to reject claim 21. Accordingly, a new ground of rejection for claim 21 is set forth below.
Applicant further asserts with respect to claim 1 that “a person of ordinary skill in the art would not combine Kruecker with a combination of Gossler and Shoudy to arrive at the method of claim 1… the schematic body model of claim 1 replicates an anatomy of a patient. In contrast, the segmentation of Krueker is performed on medical image data… and is unrelated to subdividing a representation of an anatomy of a patient” (Remarks, Pg. 14-15).
Examiner respectfully disagrees.
Kruecker, in Par. [0001-2], indicates that “Medical image segmentation divides medical images into regions with similar properties. The role of segmentation is to subdivide anatomical structures in the medical images, so as to, for example, study the anatomical structures, identify region(s) of interest, measure tissue volume, and so on. Anatomical structures include bones and organs in a human body, and medical images may include one such anatomical structure or multiple anatomical structures… Model-based segmentation is a tool for automated or semi-automated medical image segmentation. Models include multiple parts and/or nodes, and consist of a three-dimensional (3D) surface mesh and a set of features that detail anatomical structures”, and in Par. [0035-60], further indicates that “process starts at S210 by loading and displaying a segmentation model. A segmentation model is a 2D or 3D model of one or more types of structures, and is therefore a modeled tissue structure. The segmentation model and the medical images described herein can be displayed as 2D models and medical images… a current structure is displayed or highlighted for an organ/structure in the segmentation model of S210. In other words, if there is only a single organ/structure in the segmentation model of S210, the displaying or highlighting at S215 will only occur once. If there are multiple organs/structures in the segmentation model of S210, the displaying or highlighting at S215 will be repeated with intermediate processing from S230 to S275 between each iteration. When an organ/structure is displayed but not highlighted, this may mean that the organ/structure of the segmentation model is selectively displayed or illuminated for a time but not at other times. When the organ/structure is highlighted, this may be taken to mean that the organ/structure of the segmentation model is selectively highlighted… the image may contain a first tissue structure, a second tissue structure, and additional tissue structures derived from imaging a patient… a structure in an image from a patient may be segmented… mapping a predetermined number of landmarks to corresponding locations. In this way, a structure in an image from a patient may be segmented”, for example.
Kruecker's teachings above indicate that, although the segmentation is performed on medical image data, it teaches a similar concept related to subdividing (i.e. segmenting) a representation of an anatomy of a patient (e.g. the medical image segmentation model uses a segmentation process to subdivide anatomical structures in medical images, including a structure in an image from a patient to be segmented), as indicated above, for example.
Therefore, based on the above rationale, Applicant’s remarks with respect to claim 1 are respectfully found not persuasive.
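For purposes of illustration only, the following sketch (not drawn from Kruecker; the landmark coordinates, function names, and the simple nearest-center labeling step are hypothetical simplifications) outlines the model-based segmentation concept quoted above: a predetermined set of model landmarks is mapped to corresponding image locations, and the fitted transform is then used to subdivide the image into labeled anatomical regions.

```python
# Illustrative sketch only (not code from Kruecker): model-based segmentation in the
# sense quoted above, i.e. mapping a predetermined set of model landmarks to
# corresponding image locations and using the fitted transform to subdivide the
# image into labeled anatomical regions. All names and values are hypothetical.
import numpy as np

def fit_affine(model_pts: np.ndarray, image_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine transform (3x2) taking model landmarks to image landmarks."""
    n = model_pts.shape[0]
    homo = np.hstack([model_pts, np.ones((n, 1))])          # (n, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(homo, image_pts, rcond=None)    # solve homo @ A ~= image_pts
    return A

def label_pixels(image_shape, model_region_centers, affine):
    """Assign each pixel to the nearest model region center mapped into image space."""
    centers = np.hstack([model_region_centers,
                         np.ones((len(model_region_centers), 1))]) @ affine
    ys, xs = np.indices(image_shape)
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(image_shape)             # per-pixel region label

# Hypothetical landmarks: model space vs. where they were detected in the image.
model_landmarks = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 60.0]])
image_landmarks = np.array([[22.0, 18.0], [98.0, 25.0], [60.0, 115.0]])
affine = fit_affine(model_landmarks, image_landmarks)
labels = label_pixels((128, 128), np.array([[20.0, 20.0], [40.0, 45.0]]), affine)
print(labels.shape, np.unique(labels))                        # (128, 128) [0 1]
```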
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Shoudy et al. (US PG Publication No. US 2020/0375546 A1), hereafter referred to as Shoudy.
Regarding claim 21, Gossler discloses a computer-implemented method for ascertaining examination information during a diagnostic assessment of patient data relating to a patient (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), comprising:
receiving the patient data relating to the patient, wherein the patient data comprises medical image data that represents an anatomical region of the patient (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; comprising: receiving the patient data relating to the patient, wherein the patient data comprises medical image data that represents an anatomical region of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient (i.e. receiving the patient data relating to the patient), including anatomical structures of the patient (i.e. wherein the patient data comprises medical image data that represents an anatomical region of the patient), as indicated above), for example);
providing a schematic body model of the patient based on the patient data, the schematic body model schematically replicates at least one anatomy of the patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; providing a schematic body model of the patient based on the patient data, the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, as indicated above), for example);
establishing a registration between the medical image data and the schematic body model (Par. [0010-17]: software for image diagnosis also enables the simultaneous representation of several image data records (adjacent to one another or superimposed). The image data records can herewith also originate from different imaging methods. Registration of the image data records herewith enables individual image diagnoses to be compared longitudinally or observed in extended representations (e.g. anatomical details by means of CT, functional information by means of MR, metabolic information by way of PET)… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time… The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time (also animated as film). Registration of the results of different examinations on a body model also enables reference to be made to possible inconsistencies in the results… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data … a uniform type of information representation is enabled at any time and in any procedural context across all body regions, organs and image data records of different modalities. As a result, learning and synergy effects and higher efficiencies result during the further (development) and use of the system; Par. [0042-43]: automatically determined information relating to image diagnosis by further characteristics and interpretations… The position in the image (volume) can therefore take place by way of classical registration algorithms REGB (see 1a, 1b). In the simplest case, a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… If the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses. If this is not possible, the anatomical position of individual image diagnoses can generally be determined by means of text analysis REGM. If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model. The assistance for the user interaction such as also the function for charging and storing the models 3c, 3d including all contained image diagnoses is summarized in a component ML (model logic) which is likewise connected to the user interface (see 3a, 3b); Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; establishing a registration between the medical image data and the schematic body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient (i.e. establishing a registration between the medical image data and the schematic body model), as indicated above), for example), for example);
generating a visualization of the medical image data;
displaying the visualization for a user via a user interface (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. 
[0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generating a visualization of the medical image data; displaying the visualization for a user via a user interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize medical images as well as diagnosis available to the user at the diagnostic station (i.e. generating a visualization of the medical image data), for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, as shown in Figs. 2-6 (i.e. displaying the visualization for a user via a user interface), as indicated above), for example);
receiving a user input from the user via the user interface, the user input is directed to a generation of the examination information based on the visualization (Par. [0040-59]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station by means of dictation or text entry. This diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… User interactions are shown in FIG. 3… The user can change the zoom settings, so that more or less details relating to the examination results are shown… The user can switch the labels on and/or off… User interactions are shown in FIG. 4… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images… User interactions are shown in FIG. 5… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… User interactions are shown in FIG. 6… The user can move to results of earlier examinations by way of a time bar. Furthermore, he/she can activate a comparison mode in order to select which time points are to be compared with one another… The user can select whether all results are shown or only those which correspond to certain criteria (e.g. change in size). [0058] 3. Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change etc.) [0059] 4. The user can display a history at each examination result; receiving a user input from the user via the user interface, the user input is directed to a generation of the examination information based on the visualization (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the examination information), including a diagnostic station which enables a user to access image data records of the patient (i.e. a generation of the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. 
receiving a user input from the user via the user interface, the user input is directed to a generation of the examination information based on the visualization), as indicated above), for example);
determining an anatomical position for the examination information within the schematic body model based on the user input and the registration;
ascertaining the examination information based on the determined anatomical position and on the user input; and
providing the examination information (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; determining an anatomical position for the examination information within the schematic body model based on the user input and the registration; ascertaining the examination information based on the determined anatomical position and on the user input; and providing the examination information (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information in the patient data), by automatically detecting (i.e. identifying, recognizing, etc.) field markers (i.e. identifying at least one piece of the examination information in the patient data), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. determining an anatomical position for the at least one piece of the examination information within the schematic body model), for example, including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. the examination information), which enables registration of image data on the model to assign image diagnoses to anatomical structures (i.e. determining an anatomical position for the examination information within the schematic body model based on the user input and the registration), for example, including a diagnostic station which enables a user to access image data records of the patient (i.e. providing the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. ascertaining the examination information based on the determined anatomical position and on the user input), as indicated above), for example).
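By way of illustration only, the following sketch (not code from Gossler; the marker names, coordinates, inverse-distance weighting, and segment lookup are hypothetical) outlines the registration concept mapped above: a finding's position relative to automatically detected field markers in the image is carried over to the corresponding markers of the schematic body model, and the resulting model-space position is assigned to a body-model segment.

```python
# Illustrative sketch only (not code from Gossler): registering an image finding to a
# schematic body model via automatically detected field markers, in the sense quoted
# above (the finding's position relative to the markers is transmitted to the model).
# All coordinates, marker names, and segments are hypothetical.
import numpy as np

# Hypothetical field markers detected both in the image volume and in the body model.
markers_image = {"sternum": np.array([120.0, 80.0]), "left_hip": np.array([60.0, 300.0]),
                 "right_hip": np.array([180.0, 300.0])}
markers_model = {"sternum": np.array([0.50, 0.30]), "left_hip": np.array([0.35, 0.70]),
                 "right_hip": np.array([0.65, 0.70])}

def to_model_space(finding_image_xy: np.ndarray) -> np.ndarray:
    """Weight the finding by its distance to each image marker, then apply the same
    weights to the corresponding body-model markers (inverse-distance interpolation)."""
    names = list(markers_image)
    dists = np.array([np.linalg.norm(finding_image_xy - markers_image[n]) for n in names])
    weights = 1.0 / (dists + 1e-6)
    weights /= weights.sum()
    return sum(w * markers_model[n] for w, n in zip(weights, names))

def assign_segment(model_xy: np.ndarray) -> str:
    """Hypothetical lookup of the body-model segment containing a model-space position."""
    return "thorax" if model_xy[1] < 0.5 else "abdomen/pelvis"

finding_model_xy = to_model_space(np.array([130.0, 120.0]))
print(finding_model_xy, assign_segment(finding_model_xy))
```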
Gossler's teachings above disclose providing an interactive whole body model which is used for the diagnosis of medical data of a patient, as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in claim 21.
However, Shoudy teaches building (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
Gossler and Shoudy are considered to be analogous art because they pertain to medical image processing applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with building the schematic body model of the patient (as taught by Shoudy, Abstract, Par. [0004, 20-32, 37-39]) in order to provide guidance to an operator via a three-dimensional (3D) patient model and to accurately perform imaging of a desired anatomical feature of the patient (Shoudy, Abstract, Par. [0002, 17, 28, 48]).
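For illustration only, the following sketch (not code from Shoudy; the landmark values and the single affine deformation are hypothetical simplifications) outlines the "building" step taught above: anatomical reference points extracted from a 3D surface map of the patient are used to deform an anatomical atlas into the patient space, yielding a patient-specific model.

```python
# Illustrative sketch only (not code from Shoudy): "building" a patient model in the
# sense taught above, i.e. extracting anatomical reference points from a 3D surface
# map and deforming an anatomical atlas into the patient space they define. The
# landmark coordinates and the affine deformation are hypothetical simplifications.
import numpy as np

def fit_affine_3d(atlas_pts: np.ndarray, patient_pts: np.ndarray) -> np.ndarray:
    """Least-squares 3D affine transform (4x3) taking atlas landmarks to patient landmarks."""
    homo = np.hstack([atlas_pts, np.ones((atlas_pts.shape[0], 1))])   # (n, 4)
    T, *_ = np.linalg.lstsq(homo, patient_pts, rcond=None)
    return T

def deform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply the fitted deformation to any set of atlas points."""
    return np.hstack([points, np.ones((points.shape[0], 1))]) @ T

# Hypothetical reference points (shoulders, hips, knees): atlas space vs. patient surface map.
atlas_landmarks = np.array([[-20, 0, 150], [20, 0, 150], [-12, 0, 90],
                            [12, 0, 90], [-12, 0, 45], [12, 0, 45]], float)
patient_landmarks = np.array([[-23, 2, 168], [24, 1, 167], [-14, 3, 101],
                              [15, 2, 100], [-13, 4, 50], [14, 3, 49]], float)

T = fit_affine_3d(atlas_landmarks, patient_landmarks)
# Placing a (toy) atlas organ mesh into the patient space yields the patient model.
atlas_liver_vertices = np.array([[5, 4, 115], [9, 6, 118], [3, 7, 112]], float)
patient_liver_vertices = deform(atlas_liver_vertices, T)
print(patient_liver_vertices.round(1))
```

An actual guidance system of the kind described would presumably use denser correspondences and a non-rigid deformation; the single affine fit here only stands in for that step.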
Claims 1, 3, 8, 10-15, 22, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Shoudy, and further in view of Kruecker et al. (US PG Publication No. US 2020/0258233 A1), hereafter referred to as Kruecker.
Regarding claim 1, Gossler discloses a computer-implemented method for structuring medical examination information relating to a patient (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), the method comprising:
receiving patient data assigned to the patient (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; receiving patient data assigned to the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient (i.e. receiving patient data assigned to the patient), as indicated above), for example);
providing a schematic body model of the patient based on the patient data, wherein the schematic body model schematically replicates at least one anatomy of the patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; providing a schematic body model of the patient based on the patient data, wherein the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, as indicated above), for example);
identifying at least one piece of the examination information in the patient data;
determining an anatomical position for the at least one piece of the examination information within the schematic body model by assigning the at least one piece of examination information to a segment of the body model (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; identifying at least one piece of the examination information in the patient data; determining an anatomical position for the at least one piece of the examination information within the schematic body model by assigning the at least one piece of examination information to a segment of the body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information in the patient data) by automatically detecting (i.e. identifying, recognizing, etc.) field markers (i.e. identifying at least one piece of the examination information in the patient data), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. determining an anatomical position for the at least one piece of the examination information within the schematic body model), for example, including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. the examination information), which enables registration of image data on the model to assign image diagnoses to anatomical structures (i.e. determining an anatomical position for the at least one piece of the examination information within the schematic body model by assigning the at least one piece of examination information to a segment of the body model), as indicated above), for example);
generating a visualization of the schematic body model in which the anatomical position of the at least one piece of the medical examination information is highlighted; and
displaying the visualization for a user via a user interface (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. 
[0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generating a visualization of the schematic body model in which the anatomical position of the at least one piece of the examination information is highlighted and displaying the visualization for a user via a user interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization) medical images as well as diagnosis available to the user at the diagnostic station, for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, shown on a display apparatus, as shown in Figs. 2-6 (i.e. the anatomical position of the at least one piece of the medical examination information is highlighted and displaying the visualization for a user via a user interface), as indicated above), for example).
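For illustration only, and not as a characterization of Gossler's actual implementation, the functionality mapped above (grouping findings by anatomical localization on a schematic body model, highlighting each group with a sized reference point, and displaying the result to the user via a user interface) can be sketched as follows. The body-model coordinates, finding texts, and function names in this sketch are hypothetical assumptions introduced by the examiner solely for explanation.

    # Hypothetical sketch (not from Gossler): group findings by anatomical
    # position on a schematic body model and highlight each group with a
    # reference point whose size reflects the number of findings in the group.
    from collections import defaultdict
    import matplotlib.pyplot as plt

    # Assumed schematic body model: named anatomical localizations mapped to
    # normalized 2D coordinates on a frontal body outline.
    BODY_MODEL = {
        "lymph nodes, neck, right": (0.55, 0.85),
        "lung, left":               (0.40, 0.68),
        "liver":                    (0.55, 0.55),
        "pelvis":                   (0.50, 0.40),
    }

    findings = [  # (anatomical position, textual finding)
        ("lymph nodes, neck, right", "enlarged lymph node, 12 mm"),
        ("liver", "hypodense lesion, segment VI"),
        ("liver", "second lesion, segment VII"),
        ("pelvis", "no abnormality"),
    ]

    def visualize(findings, body_model):
        groups = defaultdict(list)
        for position, text in findings:
            groups[position].append(text)        # group by anatomical localization
        fig, ax = plt.subplots()
        for position, texts in groups.items():
            x, y = body_model[position]
            ax.scatter([x], [y], s=200 * len(texts), alpha=0.6)  # marker size ~ number of results
            ax.annotate(f"{position} ({len(texts)})", (x, y))
        ax.set_title("Schematic body model with highlighted findings")
        plt.show()                               # display the visualization for the user

    visualize(findings, BODY_MODEL)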
The teachings of Gossler above disclose providing an interactive whole body model which is used for the diagnosis of medical data of a patient, as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in claim 1.
However, Shoudy teaches building (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
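As an illustrative sketch only (the landmark coordinates, array shapes, and function names below are assumptions and do not appear in Shoudy), the "building" step mapped above, i.e. deforming an anatomical atlas into the patient space defined by a 3D surface map using corresponding anatomical reference points, could be approximated with a least-squares affine fit:

    # Hypothetical sketch (not from Shoudy): "build" a patient model by deforming
    # an anatomical atlas into the patient space defined by a 3D surface map,
    # using corresponding anatomical reference points (landmarks).
    import numpy as np

    def fit_affine(atlas_pts, patient_pts):
        """Least-squares affine transform mapping atlas landmarks to patient landmarks."""
        n = atlas_pts.shape[0]
        A = np.hstack([atlas_pts, np.ones((n, 1))])          # homogeneous atlas coordinates
        T, *_ = np.linalg.lstsq(A, patient_pts, rcond=None)  # 4x3 affine matrix
        return T

    def build_patient_model(atlas_vertices, atlas_landmarks, patient_landmarks):
        T = fit_affine(atlas_landmarks, patient_landmarks)
        V = np.hstack([atlas_vertices, np.ones((len(atlas_vertices), 1))])
        return V @ T                                          # atlas deformed into patient space

    # Assumed data: shoulder/hip/knee reference points extracted from a 3D surface map.
    atlas_landmarks   = np.array([[0.0, 1.6, 0.0], [0.4, 1.6, 0.0], [0.2, 1.0, 0.0], [0.2, 0.5, 0.0]])
    patient_landmarks = np.array([[0.0, 1.7, 0.0], [0.45, 1.7, 0.0], [0.22, 1.05, 0.0], [0.22, 0.55, 0.0]])
    atlas_vertices    = np.random.rand(100, 3)                # stand-in for an atlas surface mesh
    patient_model = build_patient_model(atlas_vertices, atlas_landmarks, patient_landmarks)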
Gossler and Shoudy are considered to be analogous art because they pertain to medical image processing applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with building (as taught by Shoudy, Abstract, Par. [0004, 20-32, 37-39]) to provide guidance to an operator via a three-dimensional (3D) patient model and to accurately perform imaging of a desired anatomical feature of a patient (Shoudy, Abstract, Par. [0002, 17, 28, 48]).
The combination of Gossler and Shoudy, as a whole, teaches the method, as indicated above, but fails to teach the following limitations, as further recited in claim 1.
However, KRUECKER teaches the schematic body model is subdivided into multiple segments and assigning the at least one piece of the medical examination information to a segment of the multiple segments (Par. [0001-2]: Medical image segmentation divides medical images into regions with similar properties. The role of segmentation is to subdivide anatomical structures in the medical images, so as to, for example, study the anatomical structures, identify region(s) of interest, measure tissue volume, and so on. Anatomical structures include bones and organs in a human body, and medical images may include one such anatomical structure or multiple anatomical structures… Model-based segmentation is a tool for automated or semi-automated medical image segmentation. Models include multiple parts and/or nodes, and consist of a three-dimensional (3D) surface mesh and a set of features that detail anatomical structures. The models of anatomical structures are created based on previous measurements of the same types of anatomical structures from multiple patients. The types of anatomical structures in models are the same types of anatomical structures in the medical images. The 3D surface mesh represents the idealized geometries (e.g., geometric shapes) of the anatomical structures. The set of features describe the appearance of the 3D surface mesh at locations corresponding to different parts and/or nodes. In model-based segmentation, a segmentation algorithm optimizes the matching of features in the models with corresponding locations in the medical images to be segmented; Par. [0023-61]: FIG. 1A illustrates views of a structure with landmarks superimposed thereon for landmark visualization for medical image segmentation… elements in FIG. 1A may identify landmarks on a modeled tissue structure in a model or corresponding locations on a medical image… FIG. 1A may be views of either a medical image or a model, and the elements identified as landmarks for FIG. 1A may alternatively be locations on a medical image. As described herein, landmarks on a modeled tissue structure are leveraged to assist in identifying corresponding locations in a medical image, where the structure(s) in the model are of the same type as the structure(s) in the medical image… a computer system for landmark visualization for medical image segmentation… accentuating of landmarks on a screen with waiting for and recognizing identification of a location corresponding to an accentuated landmark on the same screen or a different screen. By intuitively presenting and correlating landmarks on a modeled tissue structure with locations on an image of a tissue structure of the same type, reliability of the mapping and subsequent segmentation can be improved. The accentuating described herein may be performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings… process starts at S210 by loading and displaying a segmentation model. A segmentation model is a 2D or 3D model of one or more types of structures… The segmentation model and the medical images described herein can be displayed as 2D models and medical images. The segmentation model of any one structure is based on measurements of multiple structures of the same type… The basis of the segmentation model may be average measurements or median measurements of the multiple structures of the same type… The multiple structures may be tissue structures, such as internal organs. 
An example of a type of structure is a liver, or a pancreas…a current structure is displayed or highlighted for an organ/structure in the segmentation model of S210… Highlighting a structure may involve selectively brightening, outlining, overlaying, changing a color, or otherwise changing the display characteristics of an area on the modeled tissue structure screen 150A corresponding to the specific organ/structure being highlighted… a location in an image corresponding to the landmark in the model is identified. That is, a location in an image of a structure (i.e., the first tissue structure) corresponding to the landmark is identified… Mapping the landmarks on the modeled tissue structures to the locations of the tissue structures may involve transforming coordinates of the landmarks in the segmentation model to coordinates for the corresponding locations in an image, or vise versa…transformation of a medical image to be aligned with a segmentation model is referred to herein as fitting… Fitting is performed based on the mapping… the mapping provides a predetermined spatial relationship between a modeled tissue structure and a tissue structure in a medical image, i.e., due to the correlating between identified landmarks on the model and identified locations in the medical image… a region on the medical image where the corresponding location for the additional landmark is expected may be accentuated to help guide selection of the location… selection of a location after segmenting has already occurred in an iterative process may show how additional identifications of locations for additional landmarks will affect previous segmentation results. A next segmentation of a structure may be optimized using features such as intensities and gradients near a current segmentation. A region can be highlighted by, for example, changing colors where different colors correspond to different predetermined amounts of change… landmarks may be individually displayed and highlighted/accentuated one at a time. Similarly, each of the structures in the segmentation model of the multi-structural organ can be individually and sequentially displayed and highlighted/accentuated in order to assist the user in identifying which structure of the multi-structural organ to check for locations corresponding to a particular landmark… the landmarks on the modeled tissue structures are mapped to image(s) of the tissue structures at S577. Mapping may involve transforming coordinates of the landmark in the segmentation model to the corresponding location in an image. Once a predetermined number of landmarks in the model and locations in the image are identified, all locations in the image may have coordinates transformed to the coordinate system of the segmentation model in the segmentation at S580. In other words, a process in FIG. 5 may include mapping a predetermined number of landmarks to corresponding locations. In this way, a structure in an image from a patient may be segmented based on an idealized segmentation model from measurements of structures of the same type for previous patients… segmentation may be partially or fully performed. Additionally, segmentation may be performed again when additional landmarks and additional locations are identified. In other words, a process in FIG. 
5 may include reperforming the mapping to include a predetermined number of additional landmarks when the mapping and segmenting are performed iteratively… an iterative process for a multi-structure segmentation model may be performed wherein landmarks for a first structure are sequentially identified, corresponding locations in a medical image are next identified, and the process switches to a second structure once all locations or a predetermined minimum number of locations for the first structure are identified; the schematic body model is subdivided into multiple segments and assigning the at least one piece of the medical examination information to a segment of the multiple segments (e.g. computer system for landmark visualization and medical image segmentation includes a model-based segmentation, in which model anatomical structures (i.e. multiple segments) in an image from a patient (i.e. schematic body model of a patient) are segmented (i.e. the schematic body model is subdivided into multiple segments), for example, and the model-based segmentation is used to identify landmarks (i.e. at least one piece of the medical examination information) by intuitively (i.e. automatically) presenting and correlating (i.e. assigning, associating, etc.) each of the landmarks on a modeled structure (i.e. a segment of the multiple segments) with locations (i.e. positions, coordinates, etc.) on an image of a structure of the same type (i.e. and assigning the at least one piece of the medical examination information to a segment of the multiple segments), as indicated above), for example).
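The subdividing-and-assigning functionality mapped above can be illustrated, purely hypothetically, as a body model partitioned into named segments with a piece of examination information assigned to the segment containing its position; the segment names, coordinate values, and helper functions below are assumptions introduced by the examiner for explanatory purposes only and are not drawn from KRUECKER.

    # Hypothetical sketch (not from KRUECKER): a schematic body model subdivided
    # into named segments (bounding boxes in model coordinates); each piece of
    # examination information is assigned to the segment containing its position.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        name: str
        lo: tuple  # (x, y, z) lower corner of the segment region
        hi: tuple  # (x, y, z) upper corner of the segment region

        def contains(self, p):
            return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

    SEGMENTS = [
        Segment("head",    (0.3, 1.5, 0.0), (0.7, 1.8, 0.3)),
        Segment("thorax",  (0.2, 1.0, 0.0), (0.8, 1.5, 0.4)),
        Segment("abdomen", (0.2, 0.6, 0.0), (0.8, 1.0, 0.4)),
    ]

    def assign_to_segment(position, segments=SEGMENTS):
        """Return the segment of the subdivided body model that contains the finding."""
        for seg in segments:
            if seg.contains(position):
                return seg
        return None

    finding_position = (0.5, 1.2, 0.2)               # model coordinates of one finding
    print(assign_to_segment(finding_position).name)  # -> "thorax"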
The teachings of Gossler above disclose that a relative position with respect to field markers is transmitted to the body model in order to determine the anatomical position of individual image diagnoses, including at least one position in the body model assigned to the examination data record, by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6, for example, as indicated above, but do not expressly disclose highlighted, as recited in the claim.
However, KRUECKER teaches highlighted (Par. [0023-61]: FIG. 1A illustrates views of a structure with landmarks superimposed thereon for landmark visualization for medical image segmentation… elements in FIG. 1A may identify landmarks on a modeled tissue structure in a model or corresponding locations on a medical image… FIG. 1A may be views of either a medical image or a model, and the elements identified as landmarks for FIG. 1A may alternatively be locations on a medical image. As described herein, landmarks on a modeled tissue structure are leveraged to assist in identifying corresponding locations in a medical image, where the structure(s) in the model are of the same type as the structure(s) in the medical image… a computer system for landmark visualization for medical image segmentation… accentuating of landmarks on a screen with waiting for and recognizing identification of a location corresponding to an accentuated landmark on the same screen or a different screen. By intuitively presenting and correlating landmarks on a modeled tissue structure with locations on an image of a tissue structure of the same type, reliability of the mapping and subsequent segmentation can be improved. The accentuating described herein may be performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings… process starts at S210 by loading and displaying a segmentation model. A segmentation model is a 2D or 3D model of one or more types of structures… The segmentation model and the medical images described herein can be displayed as 2D models and medical images. The segmentation model of any one structure is based on measurements of multiple structures of the same type… The basis of the segmentation model may be average measurements or median measurements of the multiple structures of the same type… The multiple structures may be tissue structures, such as internal organs. An example of a type of structure is a liver, or a pancreas…a current structure is displayed or highlighted for an organ/structure in the segmentation model of S210… Highlighting a structure may involve selectively brightening, outlining, overlaying, changing a color, or otherwise changing the display characteristics of an area on the modeled tissue structure screen 150A corresponding to the specific organ/structure being highlighted… a location in an image corresponding to the landmark in the model is identified. 
That is, a location in an image of a structure (i.e., the first tissue structure) corresponding to the landmark is identified… Mapping the landmarks on the modeled tissue structures to the locations of the tissue structures may involve transforming coordinates of the landmarks in the segmentation model to coordinates for the corresponding locations in an image, or vise versa…transformation of a medical image to be aligned with a segmentation model is referred to herein as fitting… Fitting is performed based on the mapping… the mapping provides a predetermined spatial relationship between a modeled tissue structure and a tissue structure in a medical image, i.e., due to the correlating between identified landmarks on the model and identified locations in the medical image… a region on the medical image where the corresponding location for the additional landmark is expected may be accentuated to help guide selection of the location… selection of a location after segmenting has already occurred in an iterative process may show how additional identifications of locations for additional landmarks will affect previous segmentation results. A next segmentation of a structure may be optimized using features such as intensities and gradients near a current segmentation. A region can be highlighted by, for example, changing colors where different colors correspond to different predetermined amounts of change… landmarks may be individually displayed and highlighted/accentuated one at a time. Similarly, each of the structures in the segmentation model of the multi-structural organ can be individually and sequentially displayed and highlighted/accentuated in order to assist the user in identifying which structure of the multi-structural organ to check for locations corresponding to a particular landmark… the landmarks on the modeled tissue structures are mapped to image(s) of the tissue structures at S577. Mapping may involve transforming coordinates of the landmark in the segmentation model to the corresponding location in an image. Once a predetermined number of landmarks in the model and locations in the image are identified, all locations in the image may have coordinates transformed to the coordinate system of the segmentation model in the segmentation at S580. In other words, a process in FIG. 5 may include mapping a predetermined number of landmarks to corresponding locations. In this way, a structure in an image from a patient may be segmented based on an idealized segmentation model from measurements of structures of the same type for previous patients… segmentation may be partially or fully performed. Additionally, segmentation may be performed again when additional landmarks and additional locations are identified. In other words, a process in FIG. 5 may include reperforming the mapping to include a predetermined number of additional landmarks when the mapping and segmenting are performed iteratively… an iterative process for a multi-structure segmentation model may be performed wherein landmarks for a first structure are sequentially identified, corresponding locations in a medical image are next identified, and the process switches to a second structure once all locations or a predetermined minimum number of locations for the first structure are identified; highlighted (e.g. 
accentuating is performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings, including a current structure highlighted for an organ/structure in the segmentation model, as indicated above), for example).
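Purely as an illustration of "highlighted" in the sense of selectively changing the display characteristics of the structure of interest, a minimal sketch is given below; the array sizes and brightening factor are assumptions introduced for explanation and are not KRUECKER's implementation.

    # Hypothetical sketch (not from KRUECKER): "highlighting" as selectively changing
    # display characteristics (here, brightening the color) of the region of the
    # model corresponding to the structure of interest.
    import numpy as np

    def highlight(vertex_colors, structure_mask, factor=1.5):
        """Brighten the RGB colors of vertices belonging to the highlighted structure."""
        out = vertex_colors.copy()
        out[structure_mask] = np.clip(out[structure_mask] * factor, 0.0, 1.0)
        return out

    colors = np.full((6, 3), 0.5)                               # uniform grey mesh colors
    mask = np.array([False, True, True, False, False, False])   # vertices of the current structure
    print(highlight(colors, mask))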
Gossler, Shoudy, and KRUECKER are considered to be analogous art because they pertain to medical image processing applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) such that the schematic body model is subdivided into multiple segments, the at least one piece of the medical examination information is assigned to a segment of the multiple segments, and the anatomical position is highlighted (as taught by KRUECKER, Abstract, Par. [0001-2, 23-61]), thereby rendering obvious the invention recited in claim 1, in order to study anatomical structures and to identify regions of interest of patients, for example, by using a segmentation algorithm that optimizes the matching of features in the models with corresponding locations in the medical images to be segmented (KRUECKER, Abstract, Par. [0001-3, 29, 46]).
Regarding claim 3, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), wherein the schematic body model is embodied such that each segment of the multiple segments is assigned a unique marker of a plurality of unique markers, and
the determining the anatomical position determines the anatomical position by identifying at least one of the unique markers of the plurality of unique markers for the at least one piece of the medical examination information, the assigning being based on the identifying the at least one of the plurality of unique markers (KRUECKER, Par. [0001-3]: role of segmentation is to subdivide anatomical structures in the medical images, so as to, for example, study the anatomical structures, identify region(s) of interest, measure tissue volume, and so on. Anatomical structures include bones and organs in a human body, and medical images may include one such anatomical structure or multiple anatomical structures… Model-based segmentation is a tool for automated or semi-automated medical image segmentation. Models include multiple parts and/or nodes, and consist of a three-dimensional (3D) surface mesh and a set of features that detail anatomical structures. The models of anatomical structures are created based on previous measurements of the same types of anatomical structures from multiple patients. The types of anatomical structures in models are the same types of anatomical structures in the medical images. The 3D surface mesh represents the idealized geometries (e.g., geometric shapes) of the anatomical structures. The set of features describe the appearance of the 3D surface mesh at locations corresponding to different parts and/or nodes. In model-based segmentation, a segmentation algorithm optimizes the matching of features in the models with corresponding locations in the medical images to be segmented… One known and efficient way to initialize the segmentation is to identify landmarks in the structure(s) to be segmented. A small number of landmarks, when accurately identified, allow the segmentation algorithm to determine the approximate position, size, shape and pose of the structure(s) to be segmented. This information can be used to initialize and subsequently guide the optimization algorithm, which in turn increases the probability of successful segmentation; Par. [0023-29]: landmarks in FIG. 1A may all be predetermined, and may be representative of landmark types in different locations for a type of structure or different types of structures in a single model… elements in FIG. 1A may identify landmarks on a modeled tissue structure in a model or corresponding locations on a medical image. In other words, FIG. 1A may be views of either a medical image or a model, and the elements identified as landmarks for FIG. 1A may alternatively be locations on a medical image. As described herein, landmarks on a modeled tissue structure are leveraged to assist in identifying corresponding locations in a medical image, where the structure(s) in the model are of the same type as the structure(s) in the medical image… descriptive labels can be provided for both tissue structures in a medical image and modeled tissue structures in a model. For example, each or any landmark in a model may be labelled with a descriptive label to help a user understand what on a structure is designated by a landmark… the display of the tissue structure and the modeled tissue structure can be coordinated, such as by alternating accentuating of landmarks on a screen with waiting for and recognizing identification of a location corresponding to an accentuated landmark on the same screen or a different screen. 
By intuitively presenting and correlating landmarks on a modeled tissue structure with locations on an image of a tissue structure of the same type, reliability of the mapping and subsequent segmentation can be improved. The accentuating described herein may be performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings. The accentuating may also be performed by focusing on a region that includes a landmark, overlaying a region that includes a landmark, or changing a color of a region that includes a landmark. It may be possible to highlight a region on a medical image where a next location is likely to be found given previous landmarks and previous mapping of the previous landmarks to previous identified locations… a location in an image corresponding to the landmark in the model is identified… a determination is made as to whether there are more landmarks… correlating between identified landmarks on the model and identified locations in the medical image; wherein the schematic body model is embodied such that each segment of the multiple segments is assigned a unique marker a plurality of unique markers, and the determining the anatomical position determines the anatomical position by identifying at least one of the unique markers of the plurality of unique markers for the at least one piece of the medical examination information, the assigning being based on the identifying the at least one of the plurality of unique markers (e.g. computer system for landmark (i.e. unique marker, identifier, label, etc.) visualization and medical image segmentation includes a model-based segmentation, in which model anatomical structures in an image from a patient are segmented, for example, and the model-based segmentation is used to identify landmarks (i.e. unique markers, identifiers, labels, etc.) by intuitively (i.e. automatically) presenting and correlating (i.e. assigning, associating, etc.) each of the landmarks on a modeled structure (i.e. wherein the schematic body model is embodied such that each segment of the multiple segments is assigned a unique marker a plurality of unique markers) with locations (i.e. positions, coordinates, etc.) on an image of a structure of the same type (i.e. and the determining the anatomical position determines the anatomical position by identifying at least one of the unique markers of the plurality of unique markers for the at least one piece of the medical examination information, the assigning being based on the identifying the at least one of the plurality of unique markers), as indicated above), for example).
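For explanatory purposes only, the unique-marker limitation mapped above can be sketched as each segment of the body model carrying a unique identifier, with the anatomical position of a piece of examination information determined by identifying the marker it references and the information then assigned to that segment; the marker codes and data structures below are hypothetical and are not taken from the cited art.

    # Hypothetical sketch (not from the cited art): each segment of the body model
    # carries a unique marker; the anatomical position of a piece of examination
    # information is determined by identifying which marker it references, and the
    # information is then assigned to that segment.
    MARKER_TO_SEGMENT = {
        "L01": "lymph nodes, neck, right",
        "L02": "liver, segment VI",
        "L03": "femur, left",
    }

    def locate_and_assign(exam_info, assignments):
        """Identify the unique marker referenced by the finding and assign it to that segment."""
        marker = exam_info["marker"]
        segment = MARKER_TO_SEGMENT[marker]      # identifying the unique marker -> anatomical position
        assignments.setdefault(segment, []).append(exam_info["text"])
        return segment

    assignments = {}
    segment = locate_and_assign({"marker": "L02", "text": "hypodense lesion, 9 mm"}, assignments)
    print(segment, assignments[segment])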
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 1.
Regarding claim 8, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), further comprising:
determining one or more attributes based on the at least one piece of the medical examination information;
providing a predetermined number of different pictograms, each pictogram representing different attributes of the one or more attributes of the medical examination information; and
assigning one pictogram from the predetermined number of different pictograms to the at least one piece of the medical examination information based on the determined attributes,
wherein the generating the visualization includes highlighting the anatomical position of the at least one piece of the medical examination information by the assigned pictogram (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; further comprising: determining one or more attributes based on the at least one piece of the medical examination information; providing a predetermined number of different pictograms, each pictogram representing different attributes of the one or more attributes of the medical examination information; and assigning one pictogram from the predetermined number of different pictograms to the at least one piece of the medical examination information based on the determined attributes, wherein the generating the visualization includes highlighting the anatomical position of the at least one piece of the medical examination information by the assigned pictogram (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization) medical images as well as diagnosis available to the user at the diagnostic station, for example, by automatically detecting (i.e. identifying, determining, recognizing, etc.) field markers (i.e. one or more attributes), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model in order to determine the anatomical position of individual image diagnoses (i.e. determining one or more attributes based on the at least one piece of the medical examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position of the at least one piece of the medical examination information), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text (i.e. pictograms), shown on a display apparatus (i.e. providing a predetermined number of different pictograms and assigning one pictogram from the predetermined number of different pictograms to the at least one piece of the medical examination information based on the determined attributes, wherein the generating the visualization includes highlighting the anatomical position of the at least one piece of the medical examination information by the assigned pictogram), as shown in Figs. 2-6), for example).
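For illustration only, the attribute/pictogram assignment mapped above (cf. the progress-mode symbols of Gossler's FIG. 6) could take the following form; the attribute names, measurement values, and the particular mapping of attributes to colored symbols are assumptions introduced by the examiner for explanation and do not characterize Gossler's actual implementation.

    # Hypothetical sketch (not from Gossler): determine a progress attribute for a
    # finding and assign one pictogram from a fixed set; the assigned pictogram is
    # then used to highlight the finding's anatomical position in the visualization.
    PICTOGRAMS = {
        "current":     "red",
        "prior":       "green",
        "worse":       "red-green",
        "better":      "green-red",
        "unchanged":   "brown",
        "disappeared": "white",
    }

    def progress_attribute(current_mm, prior_mm):
        if prior_mm is None:
            return "current"          # newly occurring finding
        if current_mm is None:
            return "disappeared"
        if current_mm > prior_mm:
            return "worse"
        if current_mm < prior_mm:
            return "better"
        return "unchanged"

    attr = progress_attribute(current_mm=14, prior_mm=10)
    print(attr, "->", PICTOGRAMS[attr])   # worse -> red-green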
The teachings of Gossler above disclose that a relative position with respect to field markers is transmitted to the body model in order to determine the anatomical position of individual image diagnoses, including at least one position in the body model assigned to the examination data record, by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6, for example, as indicated above, but do not expressly disclose highlighting, as recited in the claim.
However, KRUECKER teaches highlighting (Par. [0023-61]: FIG. 1A illustrates views of a structure with landmarks superimposed thereon for landmark visualization for medical image segmentation… elements in FIG. 1A may identify landmarks on a modeled tissue structure in a model or corresponding locations on a medical image… FIG. 1A may be views of either a medical image or a model, and the elements identified as landmarks for FIG. 1A may alternatively be locations on a medical image. As described herein, landmarks on a modeled tissue structure are leveraged to assist in identifying corresponding locations in a medical image, where the structure(s) in the model are of the same type as the structure(s) in the medical image… a computer system for landmark visualization for medical image segmentation… accentuating of landmarks on a screen with waiting for and recognizing identification of a location corresponding to an accentuated landmark on the same screen or a different screen. By intuitively presenting and correlating landmarks on a modeled tissue structure with locations on an image of a tissue structure of the same type, reliability of the mapping and subsequent segmentation can be improved. The accentuating described herein may be performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings… process starts at S210 by loading and displaying a segmentation model. A segmentation model is a 2D or 3D model of one or more types of structures… The segmentation model and the medical images described herein can be displayed as 2D models and medical images. The segmentation model of any one structure is based on measurements of multiple structures of the same type… The basis of the segmentation model may be average measurements or median measurements of the multiple structures of the same type… The multiple structures may be tissue structures, such as internal organs. An example of a type of structure is a liver, or a pancreas…a current structure is displayed or highlighted for an organ/structure in the segmentation model of S210… Highlighting a structure may involve selectively brightening, outlining, overlaying, changing a color, or otherwise changing the display characteristics of an area on the modeled tissue structure screen 150A corresponding to the specific organ/structure being highlighted… a location in an image corresponding to the landmark in the model is identified. 
That is, a location in an image of a structure (i.e., the first tissue structure) corresponding to the landmark is identified… Mapping the landmarks on the modeled tissue structures to the locations of the tissue structures may involve transforming coordinates of the landmarks in the segmentation model to coordinates for the corresponding locations in an image, or vise versa…transformation of a medical image to be aligned with a segmentation model is referred to herein as fitting… Fitting is performed based on the mapping… the mapping provides a predetermined spatial relationship between a modeled tissue structure and a tissue structure in a medical image, i.e., due to the correlating between identified landmarks on the model and identified locations in the medical image… a region on the medical image where the corresponding location for the additional landmark is expected may be accentuated to help guide selection of the location… selection of a location after segmenting has already occurred in an iterative process may show how additional identifications of locations for additional landmarks will affect previous segmentation results. A next segmentation of a structure may be optimized using features such as intensities and gradients near a current segmentation. A region can be highlighted by, for example, changing colors where different colors correspond to different predetermined amounts of change… landmarks may be individually displayed and highlighted/accentuated one at a time. Similarly, each of the structures in the segmentation model of the multi-structural organ can be individually and sequentially displayed and highlighted/accentuated in order to assist the user in identifying which structure of the multi-structural organ to check for locations corresponding to a particular landmark… the landmarks on the modeled tissue structures are mapped to image(s) of the tissue structures at S577. Mapping may involve transforming coordinates of the landmark in the segmentation model to the corresponding location in an image. Once a predetermined number of landmarks in the model and locations in the image are identified, all locations in the image may have coordinates transformed to the coordinate system of the segmentation model in the segmentation at S580. In other words, a process in FIG. 5 may include mapping a predetermined number of landmarks to corresponding locations. In this way, a structure in an image from a patient may be segmented based on an idealized segmentation model from measurements of structures of the same type for previous patients… segmentation may be partially or fully performed. Additionally, segmentation may be performed again when additional landmarks and additional locations are identified. In other words, a process in FIG. 5 may include reperforming the mapping to include a predetermined number of additional landmarks when the mapping and segmenting are performed iteratively… an iterative process for a multi-structure segmentation model may be performed wherein landmarks for a first structure are sequentially identified, corresponding locations in a medical image are next identified, and the process switches to a second structure once all locations or a predetermined minimum number of locations for the first structure are identified; highlighting (e.g. 
accentuating is performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings, including a current structure highlighted for an organ/structure in the segmentation model, as indicated above), for example).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 1.
Regarding claim 10, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), in which the patient data comprises medical image data which represents an anatomical region of the patient, further comprising:
establishing a registration between the medical image data and the schematic body model (Gossler, Par. [0010-17]: software for image diagnosis also enables the simultaneous representation of several image data records (adjacent to one another or superimposed). The image data records can herewith also originate from different imaging methods. Registration of the image data records herewith enables individual image diagnoses to be compared longitudinally or observed in extended representations (e.g. anatomical details by means of CT, functional information by means of MR, metabolic information by way of PET)… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time… The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time (also animated as film). Registration of the results of different examinations on a body model also enables reference to be made to possible inconsistencies in the results… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data … a uniform type of information representation is enabled at any time and in any procedural context across all body regions, organs and image data records of different modalities. As a result, learning and synergy effects and higher efficiencies result during the further (development) and use of the system; Par. [0042-43]: automatically determined information relating to image diagnosis by further characteristics and interpretations… The position in the image (volume) can therefore take place by way of classical registration algorithms REGB (see 1a, 1b). In the simplest case, a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… If the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses. If this is not possible, the anatomical position of individual image diagnoses can generally be determined by means of text analysis REGM. If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model. The assistance for the user interaction such as also the function for charging and storing the models 3c, 3d including all contained image diagnoses is summarized in a component ML (model logic) which is likewise connected to the user interface (see 3a, 3b); Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; in which the patient data comprises medical image data which represents an anatomical region of the patient, further comprising: establishing a registration between the medical image data and the schematic body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. in which the patient data comprises medical image data which represents an anatomical region of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient (i.e. establishing a registration between the medical image data and the schematic body model), as indicated above), for example);
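Before turning to the remaining limitations of claim 10, the registration mapped above can be illustrated, purely hypothetically, as a field-marker-based mapping from image coordinates into body-model coordinates; the marker positions and the simple per-axis linear map below are assumptions introduced for explanation only and are not drawn from Gossler.

    # Hypothetical sketch (not from Gossler): establish a registration between image
    # data and the schematic body model from detected field markers, then use it to
    # transfer an image position into body-model coordinates.
    def axis_map(img_a, img_b, model_a, model_b):
        """Linear map along one axis defined by two corresponding field markers."""
        scale = (model_b - model_a) / (img_b - img_a)
        return lambda v: model_a + scale * (v - img_a)

    # Assumed field markers detected in the image and their known model positions (z axis).
    to_model_z = axis_map(img_a=120.0, img_b=480.0, model_a=0.85, model_b=0.40)

    finding_image_z = 300.0
    print(to_model_z(finding_image_z))   # anatomical height of the finding in the body model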
generating a second visualization based on the medical image data;
displaying the second visualization via the user interface (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generating a second visualization based on the medical image data; displaying the second visualization via the user interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize medical images as well as diagnosis available to the user at the diagnostic station (i.e. generating a first, second, third… Nth visualization), for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, as shown in Figs. 2-6 (i.e. displaying the visualization for a user via a user interface), as indicated above), for example);
receiving a user input from the user via the user interface, the user input is directed to a generation of a further piece of examination information based on the second visualization (Par. [0040-59]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station by means of dictation or text entry. This diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… User interactions are shown in FIG. 3… The user can change the zoom settings, so that more or less details relating to the examination results are shown… The user can switch the labels on and/or off… User interactions are shown in FIG. 4… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images… User interactions are shown in FIG. 5… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… User interactions are shown in FIG. 6… The user can move to results of earlier examinations by way of a time bar. Furthermore, he/she can activate a comparison mode in order to select which time points are to be compared with one another… The user can select whether all results are shown or only those which correspond to certain criteria (e.g. change in size). [0058] 3. Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change etc.) [0059] 4. The user can display a history at each examination result; receiving a user input from the user via the user interface, the user input is directed to a generation of a further piece of examination information based on the second visualization (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the examination information), including a diagnostic station which enables a user to access image data records of the patient (i.e. a generation of the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. 
receiving a user input from the user via the user interface, the user input is directed to a generation of a further piece of examination information based on the first, second, third… Nth visualization), as indicated above), for example);
determining an anatomical position for the further piece of examination information based on the user input and the registration;
ascertaining the further piece of examination information based on the determined anatomical position and on the user input; and
assigning the further piece of examination information to the patient data (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; determining an anatomical position for the further piece of examination information based on the user input and the registration;
ascertaining the further piece of examination information based on the determined anatomical position and on the user input; and
assigning the further piece of examination information to the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information in the patient data), by automatically detecting (i.e. identifying, recognizing, etc.) field markers (i.e. identifying at least one piece of the examination information in the patient data), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. determining an anatomical position for the at least one piece of the examination information within the schematic body model), for example, including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. assigning the further piece of examination information to the patient data), which enables registration of image data on the model to assign image diagnoses to anatomical structures (i.e. determining an anatomical position for the further piece of examination information based on the user input and the registration), for example, including a diagnostic station which enables a user to access image data records of the patient, by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. ascertaining the further piece of examination information based on the determined anatomical position and on the user input), as indicated above), for example).
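For illustration of the mapping above only, the following is a minimal Python sketch, using hypothetical marker coordinates, segment names, and function names that do not appear in the cited references, of how a finding located in image space could be transferred into a schematic body-model coordinate frame via detected field markers and then assigned to a named model segment. A least-squares affine fit is used here purely as a convenient stand-in for whatever registration Gossler actually performs.

```python
# Illustrative sketch (not from the cited references): mapping an image-space
# finding into a schematic body-model coordinate frame via detected field
# markers, then assigning it to a named model segment. All values are hypothetical.
import numpy as np

# Hypothetical field markers: image coordinates (px) -> body-model coordinates (normalized).
markers_image = np.array([[120.0, 80.0], [130.0, 400.0], [125.0, 700.0]])
markers_model = np.array([[0.50, 0.10], [0.50, 0.50], [0.50, 0.90]])

# Least-squares affine fit (2D): model ~= image @ A + b.
X = np.hstack([markers_image, np.ones((len(markers_image), 1))])
coeffs, *_ = np.linalg.lstsq(X, markers_model, rcond=None)

def to_model(point_image):
    """Transfer an image-space point into the schematic body-model frame."""
    return np.append(point_image, 1.0) @ coeffs

# Hypothetical model segments with reference positions (cf. the R1..R7 style regions).
segments = {"neck": np.array([0.50, 0.15]),
            "thorax": np.array([0.50, 0.40]),
            "abdomen": np.array([0.50, 0.65])}

def assign_segment(point_model):
    """Assign a body-model position to the nearest named segment."""
    return min(segments, key=lambda s: np.linalg.norm(segments[s] - point_model))

finding_image = np.array([128.0, 350.0])     # e.g. location of a dictated image diagnosis
finding_model = to_model(finding_image)      # anatomical position in the body model
print(assign_segment(finding_model), finding_model)
```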
Regarding claim 11, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), wherein
the schematic body model comprises a whole-body model of the patient, and
the visualization of the schematic body model comprises a schematic whole-body view of the patient (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; wherein the schematic body model comprises a whole-body model of the patient, and the visualization of the schematic body model comprises a schematic whole-body view of the patient (e.g. 
computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient, such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model comprises a whole-body model of the patient, and the visualization of the schematic body model comprises a schematic whole-body view of the patient), as shown in Fig. 2 below:
[Gossler, Fig. 2 (greyscale image), reproduced], for example).
Regarding claim 12, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), wherein
the schematic body model includes at least one first level of detail and a second level of detail, wherein the second level of detail is an extract from the first level of detail,
at least one of the first level of detail or the second level of detail is selectable, and
the generating the visualization generates the visualization of the schematic body model based on at least one of the first level of detail or the second level of detail (Gossler, Par. [0040-47]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 3, which are characterized with 1 and 2…1. The user can change the zoom settings, so that more or less details relating to the examination results are shown; wherein the schematic body model includes at least one first level of detail and a second level of detail, wherein the second level of detail is an extract from the first level of detail, at least one of the first level of detail of the second level of detail is selectable, and the generating the visualization generates the visualization of the schematic body model based on at least one of the first level of detail or the second level of detail (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient, such as a patient-specific whole body model, including interaction with the body model, by way of a user interface, in which a user changes (i.e. selects) zoom (i.e. detail) settings (i.e. at least one of the first level of detail of the second level of detail is selectable), so that more (+) or less (-) details (i.e. wherein the schematic body model includes at least one first level of detail and a second level of detail) relating to the examination results are shown, including a first level of detail, or low level of detail, which is selected by using the Zoom detail level control, as shown in Fig. 2 below:
[Gossler, Fig. 2 (greyscale image, annotated with an arrow), reproduced]
, for example, and including a second level of detail, or high level of detail, which is an extract (i.e. section, segment, portion, etc.) from the first level of detail, and is selected by using the Zoom detail level control (1), as shown in Fig. 3 below:
[Gossler, Fig. 3 (greyscale image, annotated with an arrow), reproduced]
(i.e. the generating the visualization generates the visualization of the schematic body model based on at least one of the first level of detail or the second level of detail), as indicated above), for example).
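As a purely illustrative aid (hypothetical data structures and function names; not taken from Gossler), the sketch below shows one way the zoom-dependent display discussed above could behave: at the first (coarse) detail level findings are grouped per anatomical region with a count, and at the second (finer) detail level the individual labelled findings of the extract are shown.

```python
# Illustrative sketch (hypothetical data structures, not from Gossler): grouping
# findings by anatomical region at a coarse detail level and expanding them at a
# finer detail level, analogous to the zoom-dependent display described above.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    label: str      # textual description of the examination result
    region: str     # semantic annotation, e.g. anatomical localization

findings = [Finding("lymph node enlarged", "neck"),
            Finding("nodule 8 mm", "thorax"),
            Finding("nodule 5 mm", "thorax")]

def render(findings, detail_level):
    """Return display items: grouped counts at the first (coarse) level,
    individual labelled findings at the second (finer) level."""
    if detail_level == 1:                      # lowest zoom stage: one marker per group
        counts = Counter(f.region for f in findings)
        return [f"{region}: {n} finding(s)" for region, n in counts.items()]
    return [f"{f.region}: {f.label}" for f in findings]   # higher zoom: full labels

print(render(findings, detail_level=1))
print(render(findings, detail_level=2))
```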
Regarding claim 13, claim 12 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), further comprising:
automatically selecting a level of detail based on at least one of the patient data, the at least one piece of examination information or a diagnostic assessment task of the patient data (Gossler, Par. [0043-45]: the user will also use the possibility of writing or dictating image diagnoses directly into the diagnosis without specific measurements or evaluations. If the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2; further comprising: automatically selecting a level of detail based on at least one of the patient data, the at least one piece of examination information or a diagnostic assessment task of the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient, such as a patient-specific whole body model, including interaction with the body model, by way of a user interface, including zooming (i.e. automatically selecting a level of detail) and filtering the body model, which provides the user with an indication of the number of results per group and textual information relating to a description of the examination results in the lowest zoom stage are shown (i.e. further comprising: automatically selecting a level of detail based on at least one of the patient data), by way of example in FIG. 2, as indicated above), for example).
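The following short sketch (a hypothetical heuristic assumed for illustration only, not disclosed verbatim by Gossler) indicates how a detail level could be selected automatically from the number of findings and a diagnostic assessment task.

```python
# Illustrative sketch (hypothetical heuristic): automatically choosing a detail
# level from the examination information and a diagnostic task, e.g. a coarse
# overview for many findings, a fine view for a single-region follow-up.
def auto_detail_level(num_findings: int, task: str) -> int:
    """Pick a zoom/detail level: 1 = whole-body overview, 2 = detailed extract."""
    if task == "follow_up_single_finding":
        return 2                              # jump straight to the detailed extract
    return 1 if num_findings > 10 else 2      # many findings -> grouped overview

print(auto_detail_level(25, "initial_read"))              # -> 1
print(auto_detail_level(3, "follow_up_single_finding"))   # -> 2
```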
Regarding claim 14, claim 12 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), wherein
the examination information is associated in each case with at least one time point in a patient trajectory of the patient,
at least one of one or more time points or one or more time ranges of the at least one time point in the patient trajectory of the patient are selectable, and
the generating the visualization generates the visualization of the schematic body model based on at least one of the one or more time points or the one or more time ranges (Gossler, Par. [0013-17]: method and an apparatus as well as a computer program product according to the independent claims are disclosed… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time…The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time; Par. [0056-69]: user can move to results of earlier examinations by way of a time bar. Furthermore, he/she can activate a comparison mode in order to select which time points are to be compared with one another… obtaining a quick overview of changes, since these can indicate a change in the state of health of the patient but may also be an indication that the radiologist has overseen an image diagnosis relating to a time instant or estimated/measured the same differently. The temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings. A film of the changes over time can result with a quick sequence of these steps; wherein the examination information is associated in each case with at least one time point in a patient trajectory of the patient, at least one of one or more time points or one or more time ranges of the at least one time point in the patient trajectory of the patient are selectable, and the generating the visualization generates the visualization of the schematic body model based on at least one of the one or more time points or the one or more time ranges (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient, such as a patient-specific whole body model, including interaction with the body model, by way of a user interface, in which a user can move to results of earlier examinations of a patient (i.e. the examination information is associated in each case with at least one time point in a patient trajectory of the patient), by way of a time bar of the user interface, and activating a comparison mode in order to select which time points (i.e. at least one of one or more time points or one or more time ranges of the at least one time point in the patient trajectory of the patient are selectable) are to be compared with one another, in order to obtaining a quick overview of changes, which indicate a change in an image diagnosis relating to a time instant, for example and a temporal progress can therefore not only be represented by special symbols, but instead also by the automatically or manually triggered continuous display of the model relating to the available time instants (i.e. and the generating the visualization generates the visualization of the schematic body model based on at least one of the one or more time points or the one or more time ranges), as indicated above), for example).
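For illustration, the sketch below (hypothetical finding records and function names; not Gossler's code) filters findings by selected time points, as with the time bar and comparison mode, and derives a progress label corresponding to the colored symbols of the progress mode (e.g. red-green for worsening, brown for unchanged, white for disappeared).

```python
# Illustrative sketch (hypothetical data model): filtering findings by selected
# time points and deriving a progress symbol (cf. the colored symbols S of the
# progress mode) by comparing two examinations of the same position.
from dataclasses import dataclass

@dataclass
class Finding:
    anatomical_position: str
    size_mm: float
    exam_date: str            # ISO date of the examination (time point)

history = [Finding("thorax", 8.0, "2024-01-10"),
           Finding("thorax", 11.0, "2024-07-02"),
           Finding("neck", 6.0, "2024-01-10")]

def select(findings, time_points):
    """Keep only findings from the selected time points (comparison mode)."""
    return [f for f in findings if f.exam_date in time_points]

def progress_symbol(prior, current):
    """Map the change between two time points to a symbol/color code."""
    if prior is None:
        return "red (current finding only)"
    if current is None:
        return "white (disappeared)"
    if current.size_mm > prior.size_mm:
        return "red-green (got worse)"
    if current.size_mm < prior.size_mm:
        return "green-red (got better)"
    return "brown (unchanged)"

selected = select(history, {"2024-01-10", "2024-07-02"})
prior = next((f for f in selected
              if f.anatomical_position == "thorax" and f.exam_date == "2024-01-10"), None)
current = next((f for f in selected
                if f.anatomical_position == "thorax" and f.exam_date == "2024-07-02"), None)
print(progress_symbol(prior, current))   # -> "red-green (got worse)"
```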
Regarding claim 15, claim 14 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), further comprising:
automatically selecting the at least one of one or more time points or one or more time ranges based on the at least one of the patient data, the at least one piece of the examination information or a diagnostic assessment task of the patient data (Gossler, Par. [0013-17]: method and an apparatus as well as a computer program product according to the independent claims are disclosed… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time…The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time; Par. [0056-69]: user can move to results of earlier examinations by way of a time bar. Furthermore, he/she can activate a comparison mode in order to select which time points are to be compared with one another… obtaining a quick overview of changes, since these can indicate a change in the state of health of the patient but may also be an indication that the radiologist has overseen an image diagnosis relating to a time instant or estimated/measured the same differently. The temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings. A film of the changes over time can result with a quick sequence of these steps; automatically selecting the at least one of one or more time points or one or more time ranges based on the at least one of the patient data, the at least one piece of the examination information or a diagnostic assessment task of the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the patient data), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient, such as a patient-specific whole body model, including interaction with the body model, by way of a user interface, in which a user can move to results of earlier examinations of a patient, by way of a time bar of the user interface, and activating a comparison mode in order to select which time points are to be compared with one another, in order to obtaining a quick overview of changes, which indicate a change in an image diagnosis relating to a time instant, for example and a temporal progress can therefore not only be represented by special symbols, but instead also by the automatically or manually triggered continuous display of the model relating to the available time instants (i.e. automatically selecting the at least one of one or more time points or one or more time ranges based on the at least one of the patient data), as indicated above), for example).
Regarding claim 22, Gossler discloses a system for structuring medical examination information relating to a patient (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), the system comprising:
an interface (Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station); and
a controller, the controller is configured to cause the system (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model) to,
receive patient data assigned to the patient via the interface (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; receive patient data assigned to the patient via the interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient at the diagnostic station (i.e. receive patient data assigned to the patient via the interface), as indicated above), for example),
provide a schematic body model of the patient based on the patient data, wherein the schematic body model schematically replicates at least one anatomy of the patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; provide a schematic body model of the patient based on the patient data, wherein the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. provide a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, as indicated above), for example),
identify at least one piece of the medical examination information based on the patient data,
determine an anatomical position for the at least one piece of the medical examination information within the schematic body model by assigning the at least one piece of the medical examination information to a segment of the body model (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; identify at least one piece of the examination information in the patient data; determine an anatomical position for the at least one piece of the examination information within the schematic body model by assigning the at least one piece of examination information to a segment of the body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information in the patient data) by automatically detecting (i.e. identifying, recognizing, etc.) field markers (i.e. identify at least one piece of the examination information in the patient data), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. determining an anatomical position for the at least one piece of the examination information within the schematic body model), for example, including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. the examination information), which enables registration of image data on the model to assign image diagnoses to anatomical structures (i.e. determine an anatomical position for the at least one piece of the examination information within the schematic body model by assigning the at least one piece of examination information to a segment a segment of the body model), as indicated above), for example), and
generate a visualization of the schematic body model in which the anatomical position of the at least one piece of the medical examination information is highlighted (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; and generate a visualization of the schematic body model in which the anatomical position of the at least one piece of the examination information is highlighted (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generate a visualization) medical images as well as diagnosis available to the user at the diagnostic station, for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6 (i.e. the anatomical position of the at least one piece of the medical examination information is highlighted), as indicated above), for example).
Gossler's teachings above disclose providing an interactive whole body model (i.e. a schematic body model) which is used for the diagnosis of medical data of a patient (i.e. the patient), as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in line 4 of claim 22.
However, Shoudy teaches building the schematic body model of the patient (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building the schematic body model of the patient (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 1.
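As an illustrative sketch only (hypothetical landmark and atlas coordinates; not Shoudy's implementation), the following shows the general idea of "building" a patient-specific model by fitting an anatomical atlas to reference points extracted from a 3D surface map of the patient. A simple scale-and-translation fit stands in for the atlas deformation described by Shoudy.

```python
# Illustrative sketch (hypothetical atlas/landmark values): building a schematic
# patient model by fitting an anatomical atlas to patient reference points
# (anatomical landmarks) extracted from a 3D surface map.
import numpy as np

# Landmarks in atlas coordinates and the same landmarks measured on the patient.
atlas_landmarks = np.array([[-0.2, 0.0, 1.4], [0.2, 0.0, 1.4],   # shoulders
                            [-0.1, 0.0, 0.9], [0.1, 0.0, 0.9]])  # hips
patient_landmarks = np.array([[-0.24, 0.0, 1.62], [0.24, 0.0, 1.62],
                              [-0.12, 0.0, 1.04], [0.12, 0.0, 1.04]])

# Fit an isotropic scale + translation (a simple stand-in for atlas deformation).
a_c, p_c = atlas_landmarks.mean(0), patient_landmarks.mean(0)
scale = np.linalg.norm(patient_landmarks - p_c) / np.linalg.norm(atlas_landmarks - a_c)

def to_patient(atlas_point):
    """Map an atlas position into the patient-specific model space."""
    return (atlas_point - a_c) * scale + p_c

# Atlas positions of anatomical features become the patient model's positions.
atlas_organs = {"liver": np.array([0.05, 0.05, 1.15]),
                "heart": np.array([-0.03, 0.02, 1.33])}
patient_model = {name: to_patient(pos) for name, pos in atlas_organs.items()}
print(patient_model)
```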
The combination of Gossler and Shoudy teachings, as a whole, teaches the system, as indicated above, but fails to teach the following limitations, as further recited in claim 22.
However, KRUECKER teaches the schematic body model is subdivided into multiple segments and assigning the at least one piece of the medical examination information to a segment of the multiple segments (Par. [0001-2]: Medical image segmentation divides medical images into regions with similar properties. The role of segmentation is to subdivide anatomical structures in the medical images, so as to, for example, study the anatomical structures, identify region(s) of interest, measure tissue volume, and so on. Anatomical structures include bones and organs in a human body, and medical images may include one such anatomical structure or multiple anatomical structures… Model-based segmentation is a tool for automated or semi-automated medical image segmentation. Models include multiple parts and/or nodes, and consist of a three-dimensional (3D) surface mesh and a set of features that detail anatomical structures. The models of anatomical structures are created based on previous measurements of the same types of anatomical structures from multiple patients. The types of anatomical structures in models are the same types of anatomical structures in the medical images. The 3D surface mesh represents the idealized geometries (e.g., geometric shapes) of the anatomical structures. The set of features describe the appearance of the 3D surface mesh at locations corresponding to different parts and/or nodes. In model-based segmentation, a segmentation algorithm optimizes the matching of features in the models with corresponding locations in the medical images to be segmented; Par. [0023-61]: FIG. 1A illustrates views of a structure with landmarks superimposed thereon for landmark visualization for medical image segmentation… elements in FIG. 1A may identify landmarks on a modeled tissue structure in a model or corresponding locations on a medical image… FIG. 1A may be views of either a medical image or a model, and the elements identified as landmarks for FIG. 1A may alternatively be locations on a medical image. As described herein, landmarks on a modeled tissue structure are leveraged to assist in identifying corresponding locations in a medical image, where the structure(s) in the model are of the same type as the structure(s) in the medical image… a computer system for landmark visualization for medical image segmentation… accentuating of landmarks on a screen with waiting for and recognizing identification of a location corresponding to an accentuated landmark on the same screen or a different screen. By intuitively presenting and correlating landmarks on a modeled tissue structure with locations on an image of a tissue structure of the same type, reliability of the mapping and subsequent segmentation can be improved. The accentuating described herein may be performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings… process starts at S210 by loading and displaying a segmentation model. A segmentation model is a 2D or 3D model of one or more types of structures… The segmentation model and the medical images described herein can be displayed as 2D models and medical images. The segmentation model of any one structure is based on measurements of multiple structures of the same type… The basis of the segmentation model may be average measurements or median measurements of the multiple structures of the same type… The multiple structures may be tissue structures, such as internal organs. 
An example of a type of structure is a liver, or a pancreas…a current structure is displayed or highlighted for an organ/structure in the segmentation model of S210… Highlighting a structure may involve selectively brightening, outlining, overlaying, changing a color, or otherwise changing the display characteristics of an area on the modeled tissue structure screen 150A corresponding to the specific organ/structure being highlighted… a location in an image corresponding to the landmark in the model is identified. That is, a location in an image of a structure (i.e., the first tissue structure) corresponding to the landmark is identified… Mapping the landmarks on the modeled tissue structures to the locations of the tissue structures may involve transforming coordinates of the landmarks in the segmentation model to coordinates for the corresponding locations in an image, or vise versa…transformation of a medical image to be aligned with a segmentation model is referred to herein as fitting… Fitting is performed based on the mapping… the mapping provides a predetermined spatial relationship between a modeled tissue structure and a tissue structure in a medical image, i.e., due to the correlating between identified landmarks on the model and identified locations in the medical image… a region on the medical image where the corresponding location for the additional landmark is expected may be accentuated to help guide selection of the location… selection of a location after segmenting has already occurred in an iterative process may show how additional identifications of locations for additional landmarks will affect previous segmentation results. A next segmentation of a structure may be optimized using features such as intensities and gradients near a current segmentation. A region can be highlighted by, for example, changing colors where different colors correspond to different predetermined amounts of change… landmarks may be individually displayed and highlighted/accentuated one at a time. Similarly, each of the structures in the segmentation model of the multi-structural organ can be individually and sequentially displayed and highlighted/accentuated in order to assist the user in identifying which structure of the multi-structural organ to check for locations corresponding to a particular landmark… the landmarks on the modeled tissue structures are mapped to image(s) of the tissue structures at S577. Mapping may involve transforming coordinates of the landmark in the segmentation model to the corresponding location in an image. Once a predetermined number of landmarks in the model and locations in the image are identified, all locations in the image may have coordinates transformed to the coordinate system of the segmentation model in the segmentation at S580. In other words, a process in FIG. 5 may include mapping a predetermined number of landmarks to corresponding locations. In this way, a structure in an image from a patient may be segmented based on an idealized segmentation model from measurements of structures of the same type for previous patients… segmentation may be partially or fully performed. Additionally, segmentation may be performed again when additional landmarks and additional locations are identified. In other words, a process in FIG. 
5 may include reperforming the mapping to include a predetermined number of additional landmarks when the mapping and segmenting are performed iteratively… an iterative process for a multi-structure segmentation model may be performed wherein landmarks for a first structure are sequentially identified, corresponding locations in a medical image are next identified, and the process switches to a second structure once all locations or a predetermined minimum number of locations for the first structure are identified; the schematic body model is subdivided into multiple segments and assigning the at least one piece of the medical examination information to a segment of the multiple segments (e.g. computer system for landmark visualization and medical image segmentation includes a model-based segmentation, in which model structures (i.e. multiple segments) in an image from a patient (i.e. schematic body model of a patient) are segmented (i.e. the schematic body model is subdivided into multiple segments), for example, and the model-based segmentation is used to identify landmarks (i.e. at least one piece of the medical examination information) by intuitively (i.e. automatically) presenting and correlating (i.e. assigning, associating, etc.) each of the landmarks on a modeled structure (i.e. a segment of the multiple segments) with locations on an image of a structure of the same type (i.e. and assigning the at least one piece of the medical examination information to a segment of the multiple segments), as indicated above), for example).
Gossler's teachings above disclose that a relative position with respect to field markers is transmitted to the body model in order to determine the anatomical position of individual image diagnoses, including at least one position in the body model assigned to the examination data record, by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6, for example, as indicated above, but does not expressly disclose "highlighted", as recited in the claim.
However, KRUECKER teaches highlighted (e.g. accentuating is performed by highlighting or otherwise visually isolating and contrasting a landmark from its surroundings, including a current structure highlighted for an organ/structure in the segmentation model, as indicated above, for example).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 1.
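For illustration of the mapping operation KRUECKER describes (transforming landmark coordinates in a segmentation model to corresponding locations in a patient image, then fitting the model to the image), a minimal Python sketch follows. The affine least-squares fit and all names are assumptions introduced for this example; the cited reference does not specify this particular implementation.

```python
# Minimal sketch of landmark-based model-to-image mapping of the kind the cited
# passage describes. The affine-fit approach and all names are illustrative
# assumptions, not the reference's actual implementation.
import numpy as np

def fit_affine(model_landmarks: np.ndarray, image_locations: np.ndarray) -> np.ndarray:
    """Estimate a 3D affine transform (4x4, homogeneous) that maps model
    landmark coordinates onto their user-identified image locations."""
    n = model_landmarks.shape[0]
    src = np.hstack([model_landmarks, np.ones((n, 1))])      # (n, 4) homogeneous
    # Solve src @ A.T ~= image_locations in the least-squares sense.
    a_t, *_ = np.linalg.lstsq(src, image_locations, rcond=None)
    affine = np.eye(4)
    affine[:3, :] = a_t.T
    return affine

def map_model_to_image(mesh_vertices: np.ndarray, affine: np.ndarray) -> np.ndarray:
    """Transform every vertex of the segmentation model's surface mesh into
    image coordinates, i.e. 'fit' the model to the patient image."""
    n = mesh_vertices.shape[0]
    verts_h = np.hstack([mesh_vertices, np.ones((n, 1))])
    return (verts_h @ affine.T)[:, :3]

if __name__ == "__main__":
    # Hypothetical data: four landmarks in model space and the locations a user
    # identified for them in a patient image.
    model_lm = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
    image_lm = model_lm * 1.1 + np.array([5.0, -2.0, 3.0])   # scaled and shifted
    A = fit_affine(model_lm, image_lm)
    mesh = np.array([[5, 5, 5], [2, 8, 1]], float)
    print(map_model_to_image(mesh, A))                        # mesh in image space
```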
Regarding claim 26, claim 1 is incorporated and Gossler discloses a computer-readable storage medium having readable and executable program sections which, when executed by a controller of a system, cause the system to perform the method of claim 1 (Gossler, Par. [0026-27]: flowcharts describe the operations as sequential processes… The processes may correspond to methods, functions, procedures, subroutines, subprograms... Methods discussed below, some of which are illustrated by the flow charts, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks will be stored in a machine or computer readable medium such as a storage medium or non-transitory computer readable medium. A processor(s) will perform the necessary tasks).
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Shoudy, in further view of KRUECKER, as applied to claim 1 above, and in further view of SCHADEWALDT et al. (Chinese Patent Application Publication No. CN 110537227A), hereafter referred to as SCHADEWALDT.
Regarding claim 4, claim 3 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), but fails to teach the following as further recited in claim 4.
However, SCHADEWALDT teaches wherein the unique markers are based on a predetermined anatomical ontology (Pg. 1: a radiology observer comprises an electronic processor; at least one display; at least one user input device and a non-transitory storage medium storing instructions executable by the electronic processor to retrieve, from a radiology examination data storage device, a radiology examination comprising at least one radiology image and a radiology report; instructions executable by the electronic processor to retrieve or generate a set of image tags identifying anatomical features in the at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report … an electronic medical ontology to identify at least one relevant segment of the radiology report and to highlight at least one relevant segment of a radiology report in the reporting window… receive, via the at least one user input device, a selection of a segment of the radiology report shown in the reporting window and use the set of image tags, the set of reporting tags, and the electronic medical ontology to identify at least one relevant anatomical feature of the at least one radiological image and highlight at least one relevant anatomical feature of at least one radiological image in the image window; Pg. 2: a radiology observer includes at least one electronic processor, at least one display, and at least one user input device. The display shows at least a portion of the radiology image in the image window and at least a portion of the radiology report in the reporting window. A selection of the anatomical feature shown in the image window is received and a corresponding segment of the radiology report is identified and highlighted in the reporting window. A selection of a segment of the radiology report shown in the reporting window is received and a corresponding anatomical feature of the at least one radiological image is identified and highlighted in the image window. The highlighting operation uses the image anatomical feature tags and reports the clinical concept tags generated using the medical ontology and the anatomical atlas; Pg. 3: radiology observer workstation 10 retrieves the radiology examination 22 from the PACS 24, including the radiology image 30 (s) and the radiology report 32 observer workstation 10 to present the data in two windows: an image window 40, wherein at least a portion of the at least one radiology image 30 is displayed; and a reporting window 42, wherein at least a portion of the radiology report 32 is displayed… link component comprises an anatomical feature marker 52 for generating a set of image tags identifying anatomical features in the at least one radiological image 30, a set of report tags for generating a clinical concept in the segment identifying the radiology report 32, and a medical ontology 56 for linking the clinical concept and the anatomical feature… radiology viewer makes use of the thus generated image tag and the reporting tag to enable an automated link 70 between the user-selected anatomical feature of the display image 30 and the corresponding segment of the radiology report 32; Or, conversely, a link 70between a user-selected segment capable of automatically displaying the radiology report 32; Pg. 
4: When displaying the link 70, the highlighting of the selected anatomical feature and the corresponding reporting segment (s), or the highlighted display of the corresponding anatomical feature (s), may be displayed in a highlighted manner. The term "highlighting" as used herein is intended to refer to any display feature that is used to emphasize highlighted image features in the radiological image displayed in the image window 40, or to any display feature that is used to emphasize the highlighted segment of the radiology report (part of)shown in the reporting window 42… keywords in the radiology report 32 are identified with entries of the medical ontology 56, and a set of report tags are generated by correlating the segments of the radiology report 32 containing the identified keywords with the clinical concepts described in the corresponding entries of the medical ontology 56… processing is performed on the radiology report 32 to identify a segment of the radiology report 32 corresponding to the entry of the medical ontology 56, and a set of report tags is generated by associating the identified segment of the radiology report 32 with the clinical concept described in the corresponding entry of the medical ontology 56… The reference medical ontology 56 May for example be a standard medical ontology, such as RADLEX or SNOMED CT… when the user selects the image location or the reporting fragment, the anatomical structure corresponding to the image location or the clinical concept contained in the fragment is determined by referring to the image tag or report tag, respectively, and with reference to the ontology 56 to identify the corresponding report fragment (s) or the image anatomical feature (s). Thus, the clinical concept and anatomical feature are linked via a common ontology 56; Pg. 5: the clinical concept described or mentioned in the selected fragment is identified by reference to the background tag of the radiology report 32. In operation S14, ontology 56 is consulted to identify corresponding anatomical features (s) related to the identified clinical concept… a copy of the medical ontology 56 (or at least a relevant portion thereof) is suitably stored on the observer workstation 10; wherein the unique markers are based on a predetermined anatomical ontology (e.g. radiology observer system and method include instructions executable by an electronic processor to retrieve or generate a set of image tags (i.e. unique markers, labels, landmarks, etc.) identifying anatomical features in at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report, for example, and the electronic medical ontology is used to identify at least one relevant anatomical feature of the at least one radiological image, by using reference (i.e. predetermined) medical ontology, including a standard medical ontology (i.e. wherein the unique markers are based on a predetermined anatomical ontology), such as RADLEX or SNOMED CT, as indicated above), for example).
Gossler, Shoudy, KRUECKER, and SCHADEWALDT are considered to be analogous art because they pertain to medical image processing applications. Therefore, the combined teachings of Gossler, Shoudy, KRUECKER, and SCHADEWALDT, as a whole, would have rendered obvious the invention recited in claim 4 with a reasonable expectation of success in order to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with wherein the unique markers are based on a predetermined anatomical ontology (as taught by SCHADEWALDT, Pg. 1-5) in order to provide a radiological observer that provides intuitive visual links between radiology report content and related features of a radiology image as a subject of the radiology report, to provide a radiology viewer that facilitates the understanding of radiological examination by an external patient, to provide a radiological observer for radiological discovery of a visual representation of the anatomical background, and to provide a radiological observer that graphically links the clinical concept present in the radiology report to the anatomical feature represented in the underlying medical image (SCHADEWALDT, Pg. 1-2).
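To illustrate the tag-and-ontology linkage SCHADEWALDT describes, the following minimal sketch links report-segment tags to image-feature tags through a shared ontology code. The data structures and codes below are hypothetical placeholders, not RadLex or SNOMED CT content, and do not reproduce the reference's implementation.

```python
# Minimal sketch, under assumed data structures, of how tags keyed to a common
# ontology can link report segments to anatomical features to be highlighted.
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageTag:
    feature_id: str        # identifier of an anatomical feature in the image
    ontology_code: str     # hypothetical code for the anatomical concept

@dataclass(frozen=True)
class ReportTag:
    segment_id: str        # identifier of a report sentence/segment
    ontology_code: str     # clinical concept expressed in that segment

def features_for_segment(segment_id, report_tags, image_tags):
    """Given a selected report segment, return the image features that share
    an ontology concept with it (the features the viewer would highlight)."""
    codes = {t.ontology_code for t in report_tags if t.segment_id == segment_id}
    return [t.feature_id for t in image_tags if t.ontology_code in codes]

if __name__ == "__main__":
    image_tags = [ImageTag("feat-liver-outline", "ANAT:LIVER"),
                  ImageTag("feat-kidney-left", "ANAT:KIDNEY")]
    report_tags = [ReportTag("sent-3", "ANAT:LIVER"),
                   ReportTag("sent-7", "ANAT:KIDNEY")]
    print(features_for_segment("sent-3", report_tags, image_tags))
    # -> ['feat-liver-outline'] : the anatomical feature to highlight
```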
Regarding claim 5, claim 3 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), but fails to teach the following as further recited in claim 5.
However, SCHADEWALDT teaches further comprising:
identifying at least one relevance segment from the segments of the schematic body model based on at least one of the patient data, the at least one piece of the medical examination information or a diagnostic assessment task of the patient data, and
the generating the visualization includes at least one of highlighting the at least one relevance segment or limiting the visualization to the at least one relevance segment (Pg. 1: a radiology observer comprises an electronic processor; at least one display; at least one user input device and a non-transitory storage medium storing instructions executable by the electronic processor to retrieve, from a radiology examination data storage device, a radiology examination comprising at least one radiology image and a radiology report; instructions executable by the electronic processor to retrieve or generate a set of image tags identifying anatomical features in the at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report … an electronic medical ontology to identify at least one relevant segment of the radiology report and to highlight at least one relevant segment of a radiology report in the reporting window… receive, via the at least one user input device, a selection of a segment of the radiology report shown in the reporting window and use the set of image tags, the set of reporting tags, and the electronic medical ontology to identify at least one relevant anatomical feature of the at least one radiological image and highlight at least one relevant anatomical feature of at least one radiological image in the image window; Pg. 2: a radiology observer includes at least one electronic processor, at least one display, and at least one user input device. The display shows at least a portion of the radiology image in the image window and at least a portion of the radiology report in the reporting window. A selection of the anatomical feature shown in the image window is received and a corresponding segment of the radiology report is identified and highlighted in the reporting window. A selection of a segment of the radiology report shown in the reporting window is received and a corresponding anatomical feature of the at least one radiological image is identified and highlighted in the image window. The highlighting operation uses the image anatomical feature tags and reports the clinical concept tags generated using the medical ontology and the anatomical atlas; Pg. 3: radiology observer workstation 10 retrieves the radiology examination 22 from the PACS 24, including the radiology image 30 (s) and the radiology report 32 observer workstation 10 to present the data in two windows: an image window 40, wherein at least a portion of the at least one radiology image 30 is displayed; and a reporting window 42, wherein at least a portion of the radiology report 32 is displayed… link component comprises an anatomical feature marker 52 for generating a set of image tags identifying anatomical features in the at least one radiological image 30, a set of report tags for generating a clinical concept in the segment identifying the radiology report 32, and a medical ontology 56 for linking the clinical concept and the anatomical feature… radiology viewer makes use of the thus generated image tag and the reporting tag to enable an automated link 70 between the user-selected anatomical feature of the display image 30 and the corresponding segment of the radiology report 32; Or, conversely, a link 70between a user-selected segment capable of automatically displaying the radiology report 32; Pg. 
4: When displaying the link 70, the highlighting of the selected anatomical feature and the corresponding reporting segment (s), or the highlighted display of the corresponding anatomical feature (s), may be displayed in a highlighted manner. The term "highlighting" as used herein is intended to refer to any display feature that is used to emphasize highlighted image features in the radiological image displayed in the image window 40, or to any display feature that is used to emphasize the highlighted segment of the radiology report (part of)shown in the reporting window 42… keywords in the radiology report 32 are identified with entries of the medical ontology 56, and a set of report tags are generated by correlating the segments of the radiology report 32 containing the identified keywords with the clinical concepts described in the corresponding entries of the medical ontology 56… processing is performed on the radiology report 32 to identify a segment of the radiology report 32 corresponding to the entry of the medical ontology 56, and a set of report tags is generated by associating the identified segment of the radiology report 32 with the clinical concept described in the corresponding entry of the medical ontology 56… The reference medical ontology 56 May for example be a standard medical ontology, such as RADLEX or SNOMED CT… when the user selects the image location or the reporting fragment, the anatomical structure corresponding to the image location or the clinical concept contained in the fragment is determined by referring to the image tag or report tag, respectively, and with reference to the ontology 56 to identify the corresponding report fragment (s) or the image anatomical feature (s). Thus, the clinical concept and anatomical feature are linked via a common ontology 56; Pg. 5: the clinical concept described or mentioned in the selected fragment is identified by reference to the background tag of the radiology report 32. In operation S14, ontology 56 is consulted to identify corresponding anatomical features (s) related to the identified clinical concept… a copy of the medical ontology 56 (or at least a relevant portion thereof) is suitably stored on the observer workstation 10; identifying at least one relevance segment from the segments of the schematic body model based on at least one of the patient data, the at least one piece of the medical examination information or a diagnostic assessment task of the patient, and the generating the visualization includes at least one of highlighting the at least one relevance segment or limiting the visualization to the at least one relevance segment (e.g. radiology observer system and method include instructions executable by an electronic processor to retrieve or generate a set of image tags (i.e. unique markers, labels, landmarks, etc.) identifying anatomical features in at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report, for example, and the electronic medical ontology is used to identify at least one relevant (i.e. important, interesting, etc.) anatomical feature of the at least one radiological image (i.e. 
identifying at least one relevance segment from the segments of the schematic body model based on at least one of the patient data, the at least one piece of the medical examination information or a diagnostic assessment task of the patient), for example, and highlight at least one relevant anatomical feature of at least one radiological image in the image window (i.e. the generating the visualization includes at least one of highlighting the at least one relevance segment or limiting the visualization to the at least one relevance segment), as indicated above), for example).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 4.
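As an illustrative sketch of the claim-5 behaviour discussed above (identifying relevance segments of a schematic body model and highlighting them or limiting the visualization to them), the snippet below uses an assumed task-to-segment lookup table; it is not drawn from any of the cited references.

```python
# Hypothetical stand-in for an ontology- or task-driven relevance lookup.
RELEVANT_SEGMENTS_BY_TASK = {
    "liver lesion follow-up": {"liver", "abdomen"},
    "lung screening": {"thorax", "lungs"},
}

def identify_relevance_segments(task: str, body_model_segments: list[str]) -> list[str]:
    """Pick out the segments of the schematic body model relevant to the task."""
    wanted = RELEVANT_SEGMENTS_BY_TASK.get(task, set())
    return [s for s in body_model_segments if s in wanted]

def render(body_model_segments, relevance_segments, limit_to_relevant=False):
    """Return (segment, highlighted) pairs; optionally drop irrelevant segments."""
    shown = relevance_segments if limit_to_relevant else body_model_segments
    return [(s, s in relevance_segments) for s in shown]

if __name__ == "__main__":
    segments = ["head", "thorax", "abdomen", "liver", "pelvis"]
    relevant = identify_relevance_segments("liver lesion follow-up", segments)
    print(render(segments, relevant))                          # highlight liver/abdomen
    print(render(segments, relevant, limit_to_relevant=True))  # show only those
```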
Regarding claim 6, claim 5 is incorporated and the combination of Gossler, Shoudy, KRUECKER, and SCHADEWALDT, as a whole, teaches the method (Gossler, Par. [0002-14]), further comprising:
identifying at least one further piece of the medical examination information in the patient data based on the relevance segment, and
displaying the at least one further piece of the medical examination information via the user interface (SCHADEWALDT, Pg. 1: a radiology observer comprises an electronic processor; at least one display; at least one user input device and a non-transitory storage medium storing instructions executable by the electronic processor to retrieve, from a radiology examination data storage device, a radiology examination comprising at least one radiology image and a radiology report; instructions executable by the electronic processor to retrieve or generate a set of image tags identifying anatomical features in the at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report … an electronic medical ontology to identify at least one relevant segment of the radiology report and to highlight at least one relevant segment of a radiology report in the reporting window… receive, via the at least one user input device, a selection of a segment of the radiology report shown in the reporting window and use the set of image tags, the set of reporting tags, and the electronic medical ontology to identify at least one relevant anatomical feature of the at least one radiological image and highlight at least one relevant anatomical feature of at least one radiological image in the image window; Pg. 2: a radiology observer includes at least one electronic processor, at least one display, and at least one user input device. The display shows at least a portion of the radiology image in the image window and at least a portion of the radiology report in the reporting window. A selection of the anatomical feature shown in the image window is received and a corresponding segment of the radiology report is identified and highlighted in the reporting window. A selection of a segment of the radiology report shown in the reporting window is received and a corresponding anatomical feature of the at least one radiological image is identified and highlighted in the image window. The highlighting operation uses the image anatomical feature tags and reports the clinical concept tags generated using the medical ontology and the anatomical atlas; Pg. 3: radiology observer workstation 10 retrieves the radiology examination 22 from the PACS 24, including the radiology image 30 (s) and the radiology report 32 observer workstation 10 to present the data in two windows: an image window 40, wherein at least a portion of the at least one radiology image 30 is displayed; and a reporting window 42, wherein at least a portion of the radiology report 32 is displayed… link component comprises an anatomical feature marker 52 for generating a set of image tags identifying anatomical features in the at least one radiological image 30, a set of report tags for generating a clinical concept in the segment identifying the radiology report 32, and a medical ontology 56 for linking the clinical concept and the anatomical feature… radiology viewer makes use of the thus generated image tag and the reporting tag to enable an automated link 70 between the user-selected anatomical feature of the display image 30 and the corresponding segment of the radiology report 32; Or, conversely, a link 70between a user-selected segment capable of automatically displaying the radiology report 32; Pg. 
4: When displaying the link 70, the highlighting of the selected anatomical feature and the corresponding reporting segment (s), or the highlighted display of the corresponding anatomical feature (s), may be displayed in a highlighted manner. The term "highlighting" as used herein is intended to refer to any display feature that is used to emphasize highlighted image features in the radiological image displayed in the image window 40, or to any display feature that is used to emphasize the highlighted segment of the radiology report (part of)shown in the reporting window 42… keywords in the radiology report 32 are identified with entries of the medical ontology 56, and a set of report tags are generated by correlating the segments of the radiology report 32 containing the identified keywords with the clinical concepts described in the corresponding entries of the medical ontology 56… processing is performed on the radiology report 32 to identify a segment of the radiology report 32 corresponding to the entry of the medical ontology 56, and a set of report tags is generated by associating the identified segment of the radiology report 32 with the clinical concept described in the corresponding entry of the medical ontology 56… The reference medical ontology 56 May for example be a standard medical ontology, such as RADLEX or SNOMED CT… when the user selects the image location or the reporting fragment, the anatomical structure corresponding to the image location or the clinical concept contained in the fragment is determined by referring to the image tag or report tag, respectively, and with reference to the ontology 56 to identify the corresponding report fragment (s) or the image anatomical feature (s). Thus, the clinical concept and anatomical feature are linked via a common ontology 56; Pg. 5: the clinical concept described or mentioned in the selected fragment is identified by reference to the background tag of the radiology report 32. In operation S14, ontology 56 is consulted to identify corresponding anatomical features (s) related to the identified clinical concept… a copy of the medical ontology 56 (or at least a relevant portion thereof) is suitably stored on the observer workstation 10; identifying at least one further piece of the medical examination information in the patient data based on the relevance segment, and displaying the at least one further piece of the medical examination information via the user interface (e.g. radiology observer system and method include instructions executable by an electronic processor to retrieve or generate a set of image tags (i.e. unique markers, labels, landmarks, etc.) identifying anatomical features in at least one radiological image and a set of reporting tags identifying clinical concepts in the radiological report, for example, and the electronic medical ontology is used to identify relevant (i.e. important, interesting, etc.) anatomical features of the at least one radiological image (i.e. identifying at least one further piece of the medical examination information in the patient data based on the relevance segment), for example, in which radiology observer includes at least one electronic processor, at least one display, and at least one user input device, and the display shows at least a portion of the radiology image in the image window and at least a portion of the radiology report in the reporting window (i.e. 
and displaying the at least one further piece of the medical examination information via the user interface), as indicated above ).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 4.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Shoudy, in further view of KRUECKER, as applied to claim 1 above, and in further view of SEIFERT et al. (US PG Publication No. US 2015/0161786 A1), hereafter referred to as SEIFERT.
Regarding claim 7, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), wherein the at least one piece of the medical examination information includes a plurality of the medical examination information (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; wherein the at least one piece of the medical examination information includes a plurality of the medical examination information (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. wherein the at least one piece of the medical examination information includes a plurality of the medical examination information), including a diagnostic station which enables a user to access image data records of the patient, as indicated above), for example), but fails to teach the following as further recited in claim 7.
However, SEIFERT teaches and the method further comprises:
establishing a prioritization of the plurality of the medical examination information based on at least one of the patient data, the examination information or a diagnostic assessment task of the patient data, wherein the prioritization is based on a relative relevance of a respective piece of the medical examination information within the plurality of the medical examination information, and the generating the visualization is based on the prioritization (Par. [0013-32]: a computer implemented tool which enables the user to get all the relevant three-dimensional medical images on one click in response to his query. The query relates to a specific anatomical structure (for example the liver, heart, kidney etc.) and the purpose is that he gets a composition of three-dimensional images which has been selected from a plurality of three-dimensional images (from different patients and/or from different acquisition times and/or acquisition modalities) and which all do comprise the relevant anatomical structure, the query refers to. Thus, it should no longer be necessary that the user manually clicks through the set of plurality of three-dimensional images, stored in an image archive in order to select the relevant images and subsequently to load these images and to again select the relevant part of the respective images, which is necessary for answering the respective query. All these steps, mentioned before, should be automated… As an example: If the user inputs a query: "show images of livers with liver tumors", then, the system should automatically parse this query and start a search in the database for images relating to (possibly: different) liver tumors. However, in order to provide the user only with relevant image information, it is necessary that only the liver-related parts of the images and tumor related images are to be considered. Further, it is necessary that a plurality of three-dimensional images is considered (for example from different patients and/or from the same patients at different acquisition times etc.). Accordingly, the system should provide a query-specific new image, covering the related anatomical structure (here: the liver tumor) and to merge these three-dimensional volume images to one common volume image… The method automatically aggregates all the different relevant volume images and extracts the relevant structures in it (the regions of interest, the query refers to) and aggregates these relevant structures in the plurality of different images to a common query-specific volume image; Par. [0075-117]: a computer-based implementation of a query-specific generation of medical volumes and to an automatic retrieval of volume sections to which a query refers to. A major advantage of at least one embodiment of the invention is that the method may be used for automatically localizing anatomical structures in a plurality of medical volumes without loading each of the volume images separately and manually searching the anatomical structure in the volume. The method automatically aggregates all the different relevant volume images and extracts the relevant structures in it (the regions of interest, the query refers to) and aggregates these relevant structures in the plurality of different images to a common query-specific volume image… After the workflow unit 1 has received this trigger event (e.g. a notification), it starts a landmark detection. The landmark detection is an automatic processing by a landmark detector unit 3. Several landmarks are used. 
For example 20 landmarks are enough for the system to work in high quality. It is important that enough representative landmarks in every body part or portion are present (for example in head, neck, thorax, abdomen, pelvis, extremities etc.). The workflow unit 1 uses a background knowledge database 2 to analyze the anatomical meaning (semantic content) of the landmarks detected by the landmark detector unit 3. The detected landmarks are stored… The task of the query expansion unit 11 is to expand the input concept into related meaningful concepts by the use of a background knowledge database 2. Resulting multiple sub-queries are parsed to a sub-image selector 12, which selects all the matching sub-image regions in the sub-image archive 8, consisting of a set of three-dimensional (volume) cells. The result is then returned from the sub-image selector 12 to graphical search interface 9 and adequately visualized to the user or provided for download. Accordingly, only a small fraction of volume data is to be provided for download. Only the relevant regions of interest of a plurality of volume data image files are to be presented as query-specific volume. It is no longer necessary to download all the volume data files, which cover the anatomical structure the query refers to (by contrast: this was necessary in state of the art systems); and the method further comprises: establishing a prioritization of the plurality of the medical examination information based on at least one of the patient data, the examination information or a diagnostic assessment task of the patient data, wherein the prioritization is based on a relative relevance of a respective piece of the medical examination information within the plurality of the medical examination information, and the generating the visualization is based on the prioritization (e.g. computer-based implementation of a query-specific generation of medical volumes and an automatic retrieval of volume sections (i.e. segments, extracts, portions, etc.) to which a query refers to, for example, includes automatically localizing anatomical structures by performing landmark detection (i.e. the plurality of the medical examination information based on at least one of the patient data), aggregating different relevant volume images, extracting relevant structures in it, such as regions of interest the query refers to, and aggregating these relevant structures in the plurality of different images to a common query-specific volume image (i.e. establishing a prioritization of the plurality of the medical examination information based on at least one of the patient data, wherein the prioritization is based on a relative relevance of a respective piece of the medical examination information within the plurality of the medical examination information), for example, and only the relevant regions of interest of a plurality of volume data image files are to be presented (i.e. visualized, displayed, etc.) as query-specific volume (i.e. and the generating the visualization is based on the prioritization), as indicate above), for example).
Gossler, Shoudy, KRUECKER, and SEIFERT are considered to be analogous art because they pertain to medical image processing applications. Therefore, the combined teachings of Gossler, Shoudy, KRUECKER, and SEIFERT, as a whole, would have rendered obvious the invention recited in claim 7 with a reasonable expectation of success in order to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with and the method further comprises: establishing a prioritization of the plurality of the medical examination information based on at least one of the patient data, the examination information or a diagnostic assessment task of the patient data, wherein the prioritization is based on a relative relevance of a respective piece of the medical examination information within the plurality of the medical examination information, and the generating the visualization is based on the prioritization (as taught by SEIFERT, Abstract, Par. [0013-35, 75-117]) in order to enable a user to get all relevant three-dimensional medical images on one click in response to his query, to select the relevant part of respective images, which is necessary for answering the respective query, and to provide the user only with relevant image information (SEIFERT, Abstract, Par. [0004-15]).
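The prioritization recited in claim 7 can be pictured with the following minimal sketch, which ranks pieces of examination information by a simple keyword-overlap score against a diagnostic assessment task. The scoring rule and data classes are assumptions introduced for illustration only, not SEIFERT's actual method.

```python
# Minimal, assumed-data sketch of prioritizing examination information by
# relative relevance to a diagnostic assessment task, then visualizing in order.
from dataclasses import dataclass

@dataclass
class ExamInfo:
    description: str          # e.g. a finding text or image annotation
    anatomy: str              # body-model segment it is assigned to

def relevance(info: ExamInfo, task_keywords: set[str]) -> int:
    """Illustrative relevance score: keyword overlap with the task."""
    words = set(info.description.lower().split()) | {info.anatomy.lower()}
    return len(words & task_keywords)

def prioritize(infos: list[ExamInfo], task: str) -> list[ExamInfo]:
    keywords = set(task.lower().split())
    # Sort by relative relevance within the set of examination information.
    return sorted(infos, key=lambda i: relevance(i, keywords), reverse=True)

if __name__ == "__main__":
    findings = [ExamInfo("liver lesion, 12 mm", "liver"),
                ExamInfo("degenerative spine changes", "spine"),
                ExamInfo("enlarged liver, steatosis", "liver")]
    for f in prioritize(findings, "liver tumor follow-up"):
        print(f.description)   # liver-related findings are rendered first
```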
Regarding claim 9, claim 1 is incorporated and the combination of Gossler, Shoudy, and KRUECKER, as a whole, teaches the method (Gossler, Par. [0002-14]), the method further comprising:
providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or different attribute combinations of a medical report;
displaying at least some of the predetermined different pictograms for the user via the user interface;
receiving, via the user interface, a user input from the user which comprises a pictogram selected from the displayed pictograms;
determining the anatomical position for the at least one piece of the medical examination information based on the pictogram;
determining one or more attributes of the different attributes or the different attribute combinations based on the selected pictogram;
ascertaining a further piece of the medical examination information based on the determined anatomical position and the one or more determined attributes; and
assigning the further piece of the medical examination information to the patient data (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0008-19]: the individual image diagnosis (pathological abnormalities, often extended by measurements of tumor sizes and degrees of stenosis) and also the summarized evaluation are subsequently verbalized in the form of a radiological examination report and forwarded to the treating physician for instance… interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; further comprising: providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or different attribute combinations of a medical report; displaying at least some of the predetermined different pictograms for the user via the user interface; receiving, via the user interface, a user input from the user which comprises a pictogram selected from the displayed pictograms; determining the anatomical position for the at least one piece of the medical examination information based on the pictogram; determining one or more attributes of the different attributes or the different attribute combinations based on the selected pictogram; ascertaining a further piece of the medical examination information based on the determined anatomical position and the one or more determined attributes; and assigning the further piece of the medical examination information to the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization including pictograms representing at least one of different attributes or different attribute combinations) medical images as well as diagnosis available to the user at the diagnostic station including individual image diagnosis and a summarized evaluation are subsequently verbalized in the form of a radiological examination report and forwarded to the treating physician for instance (i.e. providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or different attribute combinations of a medical report), for example, by automatically detecting (i.e. identifying, determining, recognizing, etc.) field markers (i.e. one or more attributes), which are initially determined for image diagnosis (i.e. determining one or more attributes of the different attributes or the attribute combinations of the medical findings based on the selected pictogram), for example, and the relative position with respect to these field markers is transmitted to the body model in order to determine (i.e. ascertain) the anatomical position of individual image diagnoses (i.e. determining the anatomical position for the at least one piece of the medical examination information based on the pictogram and ascertaining a further piece of the medical examination information based on the determined anatomical position and the one or more determined attributes), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position of the at least one piece of the medical examination information), by way of the user interface (i.e. 
receiving, via the user interface, a user input from the user which comprises a pictogram selected from the displayed pictograms), which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text (i.e. pictograms, visualizations, etc.), shown on a display apparatus (i.e. displaying at least some of the predetermined different pictograms for the user via the user interface), as shown in Figs. 2-6), for example), but fails to teach the following as further recited in claim 9.
However, Suzuki teaches a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-131]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position; a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site (i.e. position, location, etc.) in the displayed visualization), as indicated above), for example);
determining an anatomical position [the anatomical position for the at least one piece of the medical examination information] based on the drop site (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-160]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position… as shown in Table 171, a pattern rule of image addition or replacement when a thumbnail image is dropped to the position are attached to each of (a) to (m). Data indicating this rule is stored in the main memory 11 or the magnetic disk 12, and the divided region setting section 32c performs layout change processing by referring to the data appropriately… As described above, the user can display examinations corresponding to thumbnail images or examination icons of the history area 70 additionally in the share window 90 by dragging and dropping these thumbnail images or examination icons using the mouse 19… when performing the additional display, the display position of the examination displayed additionally can be designated by the dropping position. Therefore, additional display can be performed at the position according to the user's preference… A layout showing the display position of an examination in one or more share windows is stored in the layout storage section 33 for each routine, and the display position of the share window 90 may be changed along the layout selected by the user; determining the anatomical position for the at least one piece of the medical examination information based on the drop site (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site (i.e. position, location, etc.) in the displayed visualization), in which the user displays examinations corresponding to thumbnail images or examination icons of the history area by dragging and dropping thumbnail images or examination icons using the mouse, for example, and the display position of the examination displayed is designated by the dropping position (i.e. 
determining the anatomical position for the at least one piece of the medical examination information based on the drop site), as indicated above), for example).
Gossler, Shoudy, KRUECKER, and Suzuki are considered to be analogous art because they pertain to medical image processing applications. Therefore, the combined teachings of Gossler, Shoudy, KRUECKER, and Suzuki, as a whole, would have rendered obvious the invention recited in claim 9 with a reasonable expectation of success in order to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization and determining the anatomical position for the at least one piece of the medical examination information based on the drop site (as taught by Suzuki, Abstract, Par. [0001, 125-160]) in order to provide an examination information display device and method capable of searching, specifying, and selecting the candidate examination information easily (Suzuki, Abstract, Par. [0001-9]).
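The drag-and-drop interaction at issue in claim 9 can be illustrated by the sketch below, which resolves a drop site in the displayed visualization to an anatomical segment and attaches the selected pictogram's attributes as a further piece of examination information. The segment bounding boxes and the pictogram attribute table are hypothetical examples, not the cited references' data.

```python
# Illustrative sketch: determine the anatomical position from a drop site and
# combine it with the dropped pictogram's attributes.
SEGMENT_BOUNDS = {               # (x0, y0, x1, y1) regions in display coordinates
    "thorax":  (100, 150, 300, 320),
    "abdomen": (100, 320, 300, 480),
}
PICTOGRAM_ATTRIBUTES = {
    "nodule": {"finding": "nodule", "status": "new"},
    "stenosis": {"finding": "stenosis", "status": "follow-up"},
}

def segment_at(drop_x: float, drop_y: float):
    """Resolve the drop site to a segment of the displayed body model, if any."""
    for name, (x0, y0, x1, y1) in SEGMENT_BOUNDS.items():
        if x0 <= drop_x <= x1 and y0 <= drop_y <= y1:
            return name
    return None

def drop_pictogram(pictogram: str, drop_x: float, drop_y: float):
    """Combine the pictogram's attributes with the resolved anatomical position
    into a further piece of examination information to assign to the patient data."""
    segment = segment_at(drop_x, drop_y)
    if segment is None:
        return None
    return {"anatomical_position": segment, **PICTOGRAM_ATTRIBUTES[pictogram]}

if __name__ == "__main__":
    print(drop_pictogram("nodule", 180, 400))
    # -> {'anatomical_position': 'abdomen', 'finding': 'nodule', 'status': 'new'}
```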
Claims 16-17, 20, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Suzuki.
Regarding claim 16, Gossler discloses a computer-implemented method for ascertaining examination information during a diagnostic assessment of patient data relating to a patient (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), the method comprising:
receiving the patient data relating to the patient (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; comprising: receiving the patient data relating to the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient (i.e. receiving the patient data relating to the patient), as indicated above), for example);
generating a visualization based on the patient data, the visualization representing at least one anatomical region of the patient;
displaying the visualization for a user via a user interface (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. 
[0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generating a visualization based on the patient data, the visualization representing at least one anatomical region of the patient; displaying the visualization for a user via a user interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization) medical images as well as diagnosis available to the user at the diagnostic station (i.e. generating a visualization based on the patient data, the visualization representing at least one anatomical region of the patient), for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, as shown in Figs. 2-6 (i.e. displaying the visualization for a user via a user interface), as indicated above), for example);
providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or attribute combinations of a possible medical report of the patient;
displaying at least some of the predetermined different pictograms to allow selection of individual pictograms by the user via the user interface;
receiving a user input from the user via the user interface, the user input comprises a pictogram selected from the displayed pictograms in the displayed visualization;
determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization;
determining one or more attributes of the different attributes or the attribute combinations of the medical findings based on the selected pictogram;
ascertaining the examination information based on the determined anatomical position and the one or more determined attributes; and
providing the examination information (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. 
[0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; further comprising: providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or different attribute combinations of a medical report; displaying at least some of the predetermined different pictograms for the user via the user interface; receiving a user input from the user via the user interface, the user input comprises a pictogram selected from the displayed pictograms in the displayed visualization; determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization; determining one or more attributes of the different attributes or the attribute combinations of the medical findings based on the selected pictogram; ascertaining the examination information based on the determined anatomical position and the one or more determined attributes; and providing the examination information (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization including pictograms representing at least one of different attributes or different attribute combinations) medical images as well as diagnosis available to the user at the diagnostic station (i.e. providing a predetermined number of different pictograms, each pictogram representing at least one of different attributes or different attribute combinations of a medical report), for example, by automatically detecting (i.e. identifying, determining, recognizing, etc.) field markers (i.e. one or more attributes), which are initially determined for image diagnosis (i.e. determining one or more attributes of the different attributes or the attribute combinations of the medical findings based on the selected pictogram), for example, and the relative position with respect to these field markers is transmitted to the body model in order to determine (i.e. ascertain) the anatomical position of individual image diagnoses (i.e. determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization and ascertaining the examination information based on the determined anatomical position and the one or more determined attributes), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position of the at least one piece of the medical examination information), by way of the user interface (i.e. receiving a user input from the user via the user interface, the user input comprises a pictogram selected from the displayed pictograms in the displayed visualization), which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text (i.e. pictograms, visualizations, etc.), shown on a display apparatus (i.e. providing the examination information), as shown in Figs. 2-6), for example), but fails to teach the following as further recited in claim 16.
However, Suzuki teaches dragging and dropping of individual pictograms by the user via the user interface, user input comprises a dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-131]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position; a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. dragging and dropping of individual pictograms by the user via the user interface, user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site (i.e. position, location, etc.) in the displayed visualization), as indicated above), for example);
determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization based on the drop site (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-160]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position… as shown in Table 171, a pattern rule of image addition or replacement when a thumbnail image is dropped to the position are attached to each of (a) to (m). Data indicating this rule is stored in the main memory 11 or the magnetic disk 12, and the divided region setting section 32c performs layout change processing by referring to the data appropriately… As described above, the user can display examinations corresponding to thumbnail images or examination icons of the history area 70 additionally in the share window 90 by dragging and dropping these thumbnail images or examination icons using the mouse 19… when performing the additional display, the display position of the examination displayed additionally can be designated by the dropping position. Therefore, additional display can be performed at the position according to the user's preference… A layout showing the display position of an examination in one or more share windows is stored in the layout storage section 33 for each routine, and the display position of the share window 90 may be changed along the layout selected by the user; determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization based on the drop site (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site (i.e. position, location, etc.) in the displayed visualization), in which the user displays examinations corresponding to thumbnail images or examination icons of the history area by dragging and dropping thumbnail images or examination icons using the mouse, for example, and the display position of the examination displayed is designated by the dropping position (i.e. 
determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization based on the drop site), as indicated above), for example).
Gossler and Suzuki are considered to be analogous art because they pertain to medical image processing applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) with dragging and dropping of individual pictograms by the user via the user interface, wherein the user input comprises a dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site in the displayed visualization, and with determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization based on the drop site (as taught by Suzuki, Abstract, Par. [0001, 125-160]), in order to provide an examination information display device and method capable of searching, specifying, and selecting the candidate examination information easily (Suzuki, Abstract, Par. [0001-9]).
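For illustration only, and not as a characterization of Gossler, Suzuki, or the claimed method, the following minimal Python sketch shows one way the mechanism discussed above could be realized: a pictogram carrying report attributes is dropped onto a drop site in an anatomical visualization, the drop site is resolved to an anatomical region, and the examination information combines the resolved position with the pictogram's attributes. All names, region boundaries, and coordinates below are hypothetical assumptions introduced solely for this sketch.

# Illustrative sketch only; names and data are hypothetical, not from the cited art or the claims.
from dataclasses import dataclass


@dataclass
class Pictogram:
    name: str
    attributes: dict  # e.g. {"finding": "nodule", "size_mm": 8}


# Hypothetical 2D bounding boxes (x0, y0, x1, y1) of anatomical regions in the displayed
# visualization; a real system would derive these from the body model or image segmentation.
BODY_REGIONS = {
    "lung_right": (10, 40, 45, 90),
    "lung_left": (55, 40, 90, 90),
    "liver": (20, 95, 60, 130),
}


def region_at(drop_site):
    """Resolve a drop site (x, y) to the anatomical region whose box contains it."""
    x, y = drop_site
    for region, (x0, y0, x1, y1) in BODY_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region
    return None


def ascertain_examination_information(pictogram, drop_site):
    """Combine the anatomical position resolved from the drop site with the
    attributes represented by the selected pictogram."""
    position = region_at(drop_site)
    if position is None:
        raise ValueError("drop site lies outside any modeled anatomical region")
    return {"anatomical_position": position, **pictogram.attributes}


if __name__ == "__main__":
    nodule = Pictogram("nodule", {"finding": "nodule", "size_mm": 8})
    # The user drags the 'nodule' pictogram and drops it onto the right lung.
    info = ascertain_examination_information(nodule, drop_site=(30, 70))
    print(info)  # {'anatomical_position': 'lung_right', 'finding': 'nodule', 'size_mm': 8}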
Regarding claim 17, claim 16 is incorporated and the combination of Gossler and Suzuki teaches the method (Gossler, Par. [0002-14]), wherein the providing the examination information includes at least one of:
producing a medical report based on the examination information, or storing the examination information in the patient data (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0008-19]: the individual image diagnosis (pathological abnormalities, often extended by measurements of tumor sizes and degrees of stenosis) and also the summarized evaluation are subsequently verbalized in the form of a radiological examination report and forwarded to the treating physician for instance… interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; wherein the providing the examination information includes at least one of: producing a medical report based on the examination information, or storing the examination information in the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization including pictograms representing at least one of different attributes or different attribute combinations) medical images as well as diagnosis available to the user at the diagnostic station, wherein the individual image diagnoses and the summarized evaluation are subsequently verbalized in the form of a radiological examination report and forwarded to the treating physician, for instance (i.e. wherein the providing the examination information includes at least one of: producing a medical report based on the examination information), as indicated above), for example).
Regarding claim 20, claim 16 is incorporated and the combination of Gossler and Suzuki teaches the method (Gossler, Par. [0002-14]), wherein the generating the visualization further comprises: selecting the at least one anatomical region of the patient for the visualization, the visualization represents only the selected at least one anatomical region, and the selection is made based on at least one of the patient data or a diagnostic assessment task of the patient data (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0055-57]: User interactions are shown in FIG. 6… The user can select whether all results are shown or only those which correspond to certain criteria; wherein the generating the visualization further comprises: selecting the at least one anatomical region of the patient for the visualization, the visualization represents only the selected at least one anatomical region, and the selection is made based on at least one of the patient data or a diagnostic assessment task of the patient data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient (i.e. at least one of the patient data) to visualize (i.e. generating a visualization including pictograms) medical images as well as diagnosis available to the user at the diagnostic station, by way of user interactions shown in FIG. 6, for example, including anatomical structures of the patient (i.e. the generating the visualization further comprises: selecting the at least one anatomical region of the patient for the visualization), in which the user selects whether all results are shown or only those which correspond to certain criteria (i.e. wherein the generating the visualization further comprises: selecting the at least one anatomical region of the patient for the visualization, the visualization represents only the selected at least one anatomical region), as indicated above), for example).
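For illustration only, the short sketch below outlines, under assumed data structures, how a visualization could be restricted to anatomical regions selected from a diagnostic assessment task or from the patient data, as discussed for claim 20 above. The task-to-region table, the finding records, and all identifiers are invented for this example and are not drawn from Gossler or the claims.

# Illustrative sketch only; the mapping and records below are hypothetical.
TASK_REGIONS = {                      # hypothetical mapping of assessment tasks to regions
    "chest_followup": {"lung_right", "lung_left"},
    "abdomen_staging": {"liver"},
}

findings = [                          # hypothetical findings parsed from the patient data
    {"region": "lung_right", "text": "8 mm nodule"},
    {"region": "liver", "text": "hypodense lesion"},
]


def select_regions(task, patient_regions=None):
    """Select regions from the diagnostic task, falling back to regions present in the patient data."""
    return TASK_REGIONS.get(task) or set(patient_regions or [])


def visualization_payload(task):
    regions = select_regions(task, (f["region"] for f in findings))
    # Only findings located in the selected regions are represented in the visualization.
    return [f for f in findings if f["region"] in regions]


print(visualization_payload("chest_followup"))  # [{'region': 'lung_right', 'text': '8 mm nodule'}]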
Regarding claim 23, Gossler discloses a system for ascertaining examination information during a diagnostic assessment of patient data (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), the system comprising:
an interface (Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station); and
a controller, the controller is configured to cause the system (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model) to,
receive the patient data via the interface (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; receive the patient data via the interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient at the diagnostic station (i.e. receive the patient data via the interface), as indicated above), for example),
generate a visualization based on the patient data and to provide it to a user via the interface, the visualization represents at least one anatomical region of a patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generate a visualization based on the patient data and provide it to a user via the interface, the visualization represents at least one anatomical region of a patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generate a visualization) medical images as well as diagnosis available to the user at the diagnostic station, for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. a schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6 (i.e. generate a visualization based on the patient data and provide it to a user via the interface, the visualization represents at least one anatomical region of a patient), as indicated above), for example),
provide a predetermined number of different pictograms, each pictogram representing at least one of different attributes or attribute combinations of a possible medical report of the patient,
provide the user with at least some of the predetermined different pictograms via the interface to allow selection,
receive a user input of the user via the interface,
determine an anatomical position,
determine one or more attributes of the different attributes or the attribute combinations based on the selected pictogram,
determine the examination information based on the determined anatomical position and the one or more determined attributes, and
provide the examination information (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. 
[0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; provide a predetermined number of different pictograms, each pictogram representing at least one of different attributes or attribute combinations of a possible medical report of the patient, provide the user with at least some of the predetermined different pictograms via the interface to allow selection, receive a user input of the user via the interface, determine an anatomical position, determine one or more attributes of the different attributes or the attribute combinations based on the selected pictogram, determine the examination information based on the determined anatomical position and the one or more determined attributes, and provide the examination information (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. a visualization including pictograms representing at least one of different attributes or different attribute combinations) medical images as well as diagnosis available to the user at the diagnostic station (i.e. provide a predetermined number of different pictograms, each pictogram representing at least one of different attributes or attribute combinations of a possible medical report of the patient and provide the user with at least some of the predetermined different pictograms via the interface to allow selection), for example, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. receive a user input of the user via the interface), for example, and automatically detecting (i.e. identifying, determining, recognizing, etc.) field markers (i.e. one or more attributes), which are initially determined for image diagnosis (i.e. determine one or more attributes of the different attributes or the attribute combinations based on the selected pictogram), for example, and the relative position with respect to these field markers is transmitted to the body model in order to determine (i.e. ascertain) the anatomical position of individual image diagnoses (i.e. determine an anatomical position), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. determine the examination information based on the determined anatomical position and the one or more determined attributes), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text (i.e. pictograms, visualizations, etc.), shown on a display apparatus (i.e. provide the examination information), as shown in Figs. 2-6), for example), but fails to teach the following as further recited in claim 23.
However, Suzuki teaches dragging and dropping of individual pictograms onto the visualization by the user, the user input comprises the dragging and dropping of a pictogram selected from the provided pictograms onto a drop site in the visualization (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-131]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position; dragging and dropping of individual pictograms onto the visualization by the user, the user input comprises the dragging and dropping of a pictogram selected from the provided pictograms onto a drop site in the visualization (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. dragging and dropping of individual pictograms onto the visualization by the user, the user input comprises the dragging and dropping of a pictogram selected from the provided pictograms onto a drop site (i.e. position, location, etc.) in the visualization), as indicated above), for example), determine an anatomical position based on the drop site (Par. [0001]: an examination information display device and method and, in particular, to screen display of medical images or medical examination information; Par. [0125-160]: A reference navigation function will be described on the basis of FIG. 15. FIG. 15 is a schematic diagram showing a display example of reference navigation. A reference navigation icon 80 in the history area 70 is an icon for selecting the examination information reference procedure of each radiologist… When the user designates a thumbnail image or an examination icon displayed in the history area 70 using the mouse 19 and drags and drops it to the share window 90, a medical image or examination data corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position… as shown in Table 171, a pattern rule of image addition or replacement when a thumbnail image is dropped to the position are attached to each of (a) to (m). 
Data indicating this rule is stored in the main memory 11 or the magnetic disk 12, and the divided region setting section 32c performs layout change processing by referring to the data appropriately… As described above, the user can display examinations corresponding to thumbnail images or examination icons of the history area 70 additionally in the share window 90 by dragging and dropping these thumbnail images or examination icons using the mouse 19… when performing the additional display, the display position of the examination displayed additionally can be designated by the dropping position. Therefore, additional display can be performed at the position according to the user's preference… A layout showing the display position of an examination in one or more share windows is stored in the layout storage section 33 for each routine, and the display position of the share window 90 may be changed along the layout selected by the user; determine an anatomical position based on the drop site (e.g. examination information display device and method to screen display of medical images or medical examination information includes reference navigation icons (i.e. pictograms, visualizations, thumbnails, etc.) for selecting the examination information reference procedure of each radiologist, for example, and when a user designates (i.e. a user input) a thumbnail image or an examination icon displayed using the mouse and drags and drops it to the share window, a medical image or examination data (i.e. at least one piece of the medical examination information) corresponding to the thumbnail image or the examination icon, which has been dragged and dropped, is additionally displayed at the dropped position (i.e. a user input which comprises dragging and dropping of a pictogram selected from the displayed pictograms onto a drop site (i.e. position, location, etc.) in the displayed visualization), in which the user displays examinations corresponding to thumbnail images or examination icons of the history area by dragging and dropping thumbnail images or examination icons using the mouse, for example, and the display position of the examination displayed is designated by the dropping position (i.e. determining an anatomical position of medical findings of the patient with respect to the at least one anatomical region of the visualization based on the drop site), as indicated above), for example).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 16.
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gossler in view of Suzuki, as applied to claim 16 above, and further in view of Shoudy.
Regarding claim 18, claim 16 is incorporated and the combination of Gossler and Suzuki teaches the method (Gossler, Par. [0002-14]), wherein the generating the visualization includes building a schematic body model of the patient, the schematic body model schematically replicates at least one anatomy of the patient, and
the visualization comprises a visual representation of the schematic body model (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; wherein the generating the visualization includes providing a schematic body model of the patient, the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. wherein the generating the visualization includes providing a schematic body model of the patient, the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, for example, and the visualization comprises a visual representation of the schematic body model as shown in Figs. 2-6, as indicated above), for example).
The Gossler teachings above disclose providing an interactive whole body model which is used for the diagnosis of medical data of a patient, as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in claim 18.
However, Shoudy teaches building (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
Gossler, Suzuki, and Shoudy are considered to be analogous art because they pertain to medical image processing applications. Therefore, the combined teachings of Gossler, Suzuki, and Shoudy, as a whole, would have rendered obvious the invention recited in claim 18 with a reasonable expectation of success, in order to modify the method for computer-assisted structuring of medical examination data (as disclosed by Gossler) to include building (as taught by Shoudy, Abstract, Par. [0004, 20-32, 37-39]), to provide guidance to an operator via a three-dimensional (3D) patient model, and to accurately perform imaging of a desired anatomical feature of a patient (Shoudy, Abstract, Par. [0002, 17, 28, 48]).
Regarding claim 19, claim 16 is incorporated and the combination of Gossler and Suzuki teaches the method (Gossler, Par. [0002-14]), wherein the patient data comprises medical image data representing the at least one anatomical region of the patient, and the generating the visualization generates a visualization of the medical image data (Gossler, Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0055-57]: User interactions are shown in FIG. 6… The user can select whether all results are shown or only those which correspond to certain criteria; wherein the patient data comprises medical image data representing the at least one anatomical region of the patient, and the generating the visualization generates a visualization of the medical image data (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, for example, and including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generating a visualization) medical images as well as diagnosis available to the user at the diagnostic station (i.e. the generating the visualization generates a visualization of the medical image data), including a diagnostic station which enables a user to access image data records of a patient, including anatomical structures of the patient (i.e. wherein the patient data comprises medical image data representing the at least one anatomical region of the patient), as indicated above), for example), the method further comprising:
providing a schematic body model of the patient based on the patient data, the schematic body model schematically replicates at least one anatomy of the patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; providing a schematic body model of the patient based on the patient data, the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data, such as a patient-specific whole body model, including schematic representations (i.e. the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, as indicated above), for example); and
establishing a registration between the medical image data and the schematic body model, wherein the anatomical position is determined based on the registration, and the anatomical position is defined relative to the schematic body model (Par. [0010-17]: software for image diagnosis also enables the simultaneous representation of several image data records (adjacent to one another or superimposed). The image data records can herewith also originate from different imaging methods. Registration of the image data records herewith enables individual image diagnoses to be compared longitudinally or observed in extended representations (e.g. anatomical details by means of CT, functional information by means of MR, metabolic information by way of PET)… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time… The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time (also animated as film). Registration of the results of different examinations on a body model also enables reference to be made to possible inconsistencies in the results… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data … a uniform type of information representation is enabled at any time and in any procedural context across all body regions, organs and image data records of different modalities. As a result, learning and synergy effects and higher efficiencies result during the further (development) and use of the system; Par. [0042-43]: automatically determined information relating to image diagnosis by further characteristics and interpretations… The position in the image (volume) can therefore take place by way of classical registration algorithms REGB (see 1a, 1b). In the simplest case, a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… If the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses. If this is not possible, the anatomical position of individual image diagnoses can generally be determined by means of text analysis REGM. If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model. The assistance for the user interaction such as also the function for charging and storing the models 3c, 3d including all contained image diagnoses is summarized in a component ML (model logic) which is likewise connected to the user interface (see 3a, 3b); Par. 
[0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; establishing a registration between the medical image data and the schematic body model, wherein the anatomical position is determined based on the registration, and the anatomical position is defined relative to the schematic body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. the examination information), which enables registration of image data on the model to assign image diagnoses to anatomical structures relative to the whole body model (i.e. establishing a registration between the medical image data and the schematic body model, wherein the anatomical position is determined based on the registration, and the anatomical position is defined relative to the schematic body model), as indicated above), for example), for example).
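As an illustration only, and not as part of the grounds of rejection, the landmark-relative registration Gossler describes (the relative position of an image diagnosis with respect to automatically detected field markers is transmitted to the body model) could be sketched as follows. The function name, the dictionary structure of the marker sets, and the inverse-distance weighting are all hypothetical simplifications, not the reference's actual algorithm.

```python
import numpy as np

def register_finding_to_body_model(finding_xyz, image_markers, model_markers):
    """Illustrative stand-in for landmark-relative registration: locate a
    finding relative to automatically detected field markers in the image
    volume, then carry that relative position over to the corresponding
    markers of the schematic body model.

    image_markers / model_markers: dicts mapping a marker name to its 3D
    position in image space / body-model space (hypothetical structure).
    """
    names = sorted(set(image_markers) & set(model_markers))
    img = np.array([image_markers[n] for n in names])
    mdl = np.array([model_markers[n] for n in names])

    # Weight each marker by inverse distance to the finding so that nearby
    # markers dominate the position transferred to the body model.
    d = np.linalg.norm(img - np.asarray(finding_xyz), axis=1)
    w = 1.0 / np.maximum(d, 1e-6)
    w /= w.sum()

    # For each marker, the finding's offset in image space is re-applied at
    # the corresponding body-model marker; the weighted average defines the
    # anatomical position relative to the schematic body model.
    offsets = np.asarray(finding_xyz) - img
    return (w[:, None] * (mdl + offsets)).sum(axis=0)
```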
The Gossler teachings above disclose providing an interactive whole body model which is used for the diagnosis of medical data of a patient, as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in claim 19.
However, Shoudy teaches building (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Gossler, in view of Shoudy.
Regarding claim 24, Gossler discloses a system for ascertaining examination information during a diagnostic assessment of patient data (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), the system comprising:
an interface (Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station); and
a controller (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model), wherein the patient data comprises medical image data that represents an anatomical region of a patient (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; wherein the patient data comprises medical image data that represents an anatomical region of a patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient (i.e. receiving the patient data relating to the patient), including anatomical structures of the patient (i.e. wherein the patient data comprises medical image data that represents an anatomical region of a patient), as indicated above), for example), and the controller is configured to cause the system to (Par. [0002-14]: computer-assisted structuring of medical examination data and/or one or more examination data records… method and an apparatus as well as a computer program product according to the independent claims are disclosed… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method, which can be defined in a hardware and/or software relevant fashion and/or as a computer program product… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model),
receive the patient data via the interface (Abstract: method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; Par. [0014]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model; Par. [0040]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station; receive the patient data via the interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. patient data assigned to the patient), including a diagnostic station which enables a user to access (i.e. receive, retrieve, obtain, etc.) image data records of a patient (i.e. receive the patient data via the interface), as indicated above), for example),
generate a visualization of the medical image data and provide it to a user via the interface (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-19]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data… The patient examination data and/or diagnosis data can be shown here on a display apparatus; Par. [0040-69]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… Some textual information relating to the description of the examination results in the lowest zoom stage are shown by way of example in FIG. 2… User interactions are shown in FIG. 5, which are identified with 1 and 2… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… The user can select whether all results are shown or only those which correspond to certain criteria… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change… The user can display a history at each examination result… FIG. 
6 shows the afore-cited progress mode having symbols S in color, which may have the following meaning… red: (current finding)… green: (prior finding)… red-green: (got worse)… green-red: (got better)… brown: (unchanged)… white: (disappeared)… temporal progress can therefore not only be represented by special symbols, but instead also by the (automatically or manually triggered) continuous display of the model relating to the available time instants with continuous zoom and filter settings; Par. [0083-95]: R1 to R7 reference points and/or positions e.g. R1: lymph nodes, neck, right… S symbols in color, e.g. in red, green, red-green, green-red, brown, white; generate a visualization of the medical image data and provide it to a user via the interface (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of a patient to visualize (i.e. generate a visualization) medical images as well as diagnosis available to the user at the diagnostic station, for example, by automatically detecting (i.e. identifying, recognizing, etc.) field markers, which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. the anatomical position for the at least one piece of the examination information), including at least one position (i.e. segment, portion, region, etc.) in the body model assigned to the examination data record (i.e. the anatomical position for the at least one piece of the examination information within the schematic body model), by way of the user interface, which is highlighted (i.e. emphasized, accentuated, etc.) by markings, icons, symbols and/or text, as shown in Figs. 2-6 (i.e. generate a visualization of the medical image data and provide it to a user via the interface), as indicated above), for example),
provide a schematic body model of the patient based on the patient data, the schematic body model schematically replicates at least one anatomy of the patient (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed… providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; Par. [0014-16]: interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data; Par. [0040-44]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station… diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… The interaction with the body model K consists inter alia of zooming and filtering the body model … Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; provide a schematic body model of the patient based on the patient data, wherein the schematic body model schematically replicates at least one anatomy of the patient (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. 
the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. provide a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient, as indicated above), for example),
establish a registration between the medical image data and the schematic body model (Par. [0010-17]: software for image diagnosis also enables the simultaneous representation of several image data records (adjacent to one another or superimposed). The image data records can herewith also originate from different imaging methods. Registration of the image data records herewith enables individual image diagnoses to be compared longitudinally or observed in extended representations (e.g. anatomical details by means of CT, functional information by means of MR, metabolic information by way of PET)… An interactive whole body model is used for the diagnosis of medical data of a patient. The entire quantity of medical data relating to a patient examination is registered with the body model. With the subsequent diagnosis, the full context information relating to each individual diagnosis is therefore available at any time… The results of previous patient examinations are also registered with the same body model on this basis, so that changes to the diagnoses can be shown between different points in time (also animated as film). Registration of the results of different examinations on a body model also enables reference to be made to possible inconsistencies in the results… With the aid of the semantic annotations which were generated within the scope of the registration and possibly preceding diagnosis and which render the medical significance of a diagnosis and/or an anatomical or pathological structure comprehensible to a computer, it is possible to intelligently navigate between the whole body model and the original image data … a uniform type of information representation is enabled at any time and in any procedural context across all body regions, organs and image data records of different modalities. As a result, learning and synergy effects and higher efficiencies result during the further (development) and use of the system; Par. [0042-43]: automatically determined information relating to image diagnosis by further characteristics and interpretations… The position in the image (volume) can therefore take place by way of classical registration algorithms REGB (see 1a, 1b). In the simplest case, a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… If the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses. If this is not possible, the anatomical position of individual image diagnoses can generally be determined by means of text analysis REGM. If the anatomical position is determined, a (purely semantic) registration can likewise take place on the body model 2a, 2b. The interaction with the body model K consists inter alia of zooming and filtering the body model. The assistance for the user interaction such as also the function for charging and storing the models 3c, 3d including all contained image diagnoses is summarized in a component ML (model logic) which is likewise connected to the user interface (see 3a, 3b); Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; establishing a registration between the medical image data and the schematic body model (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the patient data assigned to the patient), includes providing an interactive whole body model which is used for the diagnosis of medical data of a patient (i.e. providing a schematic body model of the patient based on the patient data), such as a patient-specific whole body model, including schematic representations (i.e. wherein the schematic body model schematically replicates at least one anatomy of the patient), as shown in Figs. 2-6, for example, which enables registration of image data on the model to assign (i.e. associate, relate, etc.) image diagnoses to correct anatomical structures in relation to the anatomy of the patient (i.e. establishing a registration between the medical image data and the schematic body model), as indicated above), for example), for example),
receive a user input from the user via the interface, the user input is directed to a generation of the examination information based on the visualization (Par. [0040-59]: FIG. 1 shows an example embodiment of the invention in the form of an architecture of a software or hardware implementation. A user interface B for instance in the form of a diagnostic station enables a user, in particular a radiologist, to access image data records which are stored in a database for data management purposes D. The possibilities of visualizing, image interpretation I and editing E of medical images as well as diagnosis are available to the user at the diagnostic station by means of dictation or text entry. This diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization). The type of visualization of the results (in this case as differently sized reference points R1 to R7) provides the user with an indication of the number of results per group… User interactions are shown in FIG. 3… The user can change the zoom settings, so that more or less details relating to the examination results are shown… The user can switch the labels on and/or off… User interactions are shown in FIG. 4… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images… User interactions are shown in FIG. 5… If the system represents inconsistencies between examination results, this is shown visually directly in the model so that the attention of the user is directed to the inconsistency. If the user moves the mouse to the marked inconsistency, the system specifies the underlying detailed information… The user can jump directly to the results in the original images… User interactions are shown in FIG. 6… The user can move to results of earlier examinations by way of a time bar. Furthermore, he/she can activate a comparison mode in order to select which time points are to be compared with one another… The user can select whether all results are shown or only those which correspond to certain criteria (e.g. change in size)… Progress mode: if this mode is activated, the model visualizes the results in terms of their progress (worsening, improvement, no change etc.)… The user can display a history at each examination result; receive a user input from the user via the interface, the user input is directed to a generation of the examination information based on the visualization (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient (i.e. the examination information), including a diagnostic station which enables a user to access image data records of the patient (i.e. a generation of the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. 
receive a user input from the user via the interface, the user input is directed to a generation of the examination information based on the visualization), as indicated above), for example),
determine an anatomical position for the examination information within the schematic body model based on the user input and the registration,
ascertain the examination information based on the determined anatomical position and on the user input, and
provide the examination information (Abstract: Computer-assisted structuring of medical examination data and/or one or more examination data records is disclosed. A method of at least one embodiment includes providing at least one medical examination data record, which includes patient-specific data described textually and/or symbolically and/or at least one image data record created with the aid of a radiological examination device; providing at least one body model image, which represents a body model matching the examination data; and registering the at least one examination data record with the body model, wherein at least one position in the body model is assigned to the examination data record; the position being made known for interaction by way of a user interface; Par. [0002-7]: computer-assisted structuring of medical examination data and/or one or more examination data records… evaluation of the image data records largely takes place in a computer-assisted fashion at diagnostic stations, which provide for observation and navigation through the image data record and a summary of the evaluation (for instance as text or dictation). The image data record is to this end stored in series of medical images, which a radiologist essentially observes sequentially, wherein he/she dictates the evaluation… the appearance, position and changes to pathological structures are described in the evaluation; Par. [0014-18]: medical data relating to a patient examination… one or several servers and/or computers, for computer-assisted structuring of medical examination data comprising means and/or modules for implementing the afore-cited method; Par. [0040-49]: diagnostic station is extended such that it also indicates the said body model K and enables the user to interact inter alia as in the interactions shown in FIGS. 2 to 6. Image diagnoses are transmitted largely fully automatically into the body model, wherein different methods and/or algorithms A and/or services are used… a registration takes place for instance with the model based on automatically detected field markers. To this end, proximately automatically detected field markers are initially determined for the image diagnosis and the relative position with respect to these field markers is transmitted to the body model… the diagnoses are prestructured (e.g. in separate sections for head, neck/shoulder, thorax, abdomen/pelvis), this structure can be used to determine the anatomical position of individual image diagnoses… Examples of examination results are shown in FIGS. 2 to 6, which are mapped onto a model of the overall body. Associated examination results may appear grouped on the lowest zoom stage (the grouping is based on the semantic annotations of the examination results, e.g. its anatomical localization)… User interactions are shown in FIG. 4, which are identified with 1 and 2… If the user positions the mouse above an examination result, a preview pain appears with a detailed description of the result… if available, a preview image of the result can be shown. If the user clicks on this preview image, he navigates directly to this result in the original images; Par. [0068-71]: a patient-specific whole body model and annotated with semantic metadata. 
This enables registration of the image data on the model which is to assign image diagnoses to the correct anatomical structures… The model enables an efficient navigation across various body regions and organs including continuous zooming, extensive filtering of the image diagnoses… The representation of various detailed stages, schematic representation of changes to the image diagnoses by means of special symbols is possible (newly occurring image diagnoses, image diagnoses which indicate an improvement or worsening, image diagnoses which have no correspondence in terms of current examination)… interactive representation of the diagnoses in relation to the anatomy of the patient enables, in the manner described above, an improved, comprehensive understanding of the clinical picture, since important context information is made public with each diagnosis… this approach enables the elimination of separate schematic representations of individual organs for the qualitative illustration when localizing the diagnoses; determine an anatomical position for the examination information within the schematic body model based on the user input and the registration; ascertain the examination information based on the determined anatomical position and on the user input; and provide the examination information (e.g. computer-assisted structuring of medical examination data and/or one or more examination data records of a patient, including a diagnostic station which enables a user to access image data records of the patient (i.e. the examination information in the patient data), by automatically detecting (i.e. identifying, recognizing, etc.) field markers (i.e. identifying at least one piece of the examination information in the patient data), which are initially determined for image diagnosis, for example, and the relative position with respect to these field markers is transmitted to the body model (i.e. the schematic body model) in order to determine the anatomical position of individual image diagnoses (i.e. determine an anatomical position for the at least one piece of the examination information within the schematic body model), for example, including at least one position (i.e. segment, portion, region, etc.) in the body model assigned (i.e. associated, related, etc.) to the examination data record (i.e. the examination information), which enables registration of image data on the model to assign image diagnoses to anatomical structures (i.e. determine an anatomical position for the examination information within the schematic body model based on the user input and the registration), for example, including a diagnostic station which enables a user to access image data records of the patient (i.e. provide the examination information), by way of the user interface, for example, by interacting, as in the interactions shown in FIGS. 2 to 6 (i.e. ascertain the examination information based on the determined anatomical position and on the user input), as indicated above), for example).
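For illustration only, and not as part of the grounds of rejection, the chain of claim 24 limitations mapped above (a user input in the image view is carried through the established registration into the schematic body model, the body-model segment at that position supplies the anatomical position, and the examination information is assembled and provided) could be sketched as below. All names and the segment lookup are hypothetical and do not represent Gossler's actual implementation.

```python
def ascertain_examination_info(click_xyz_image, image_to_model, model_segments, note_text):
    """Minimal sketch of the claimed chain: map a user input from image space
    into the schematic body model via the registration, determine the
    anatomical position from the body-model segment containing that point,
    and return the examination information built from both.

    image_to_model: callable implementing the registration (image space ->
    body-model space). model_segments: list of (name, contains) pairs, where
    contains(point) reports whether the point lies in that model segment.
    """
    model_xyz = image_to_model(click_xyz_image)          # apply the registration
    region = next((name for name, contains in model_segments
                   if contains(model_xyz)), "unknown region")
    return {
        "anatomical_position": region,                   # defined relative to the body model
        "model_coordinates": tuple(model_xyz),
        "text": note_text,                               # the user-entered examination information
    }
```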
The Gossler teachings above disclose providing an interactive whole body model which is used for the diagnosis of medical data of a patient, as indicated above, but do not expressly disclose building (i.e. creating, generating, etc.) the schematic body model of the patient, as recited in the claim.
However, Shoudy teaches building (Par. [0004]: medical imaging guidance system may have a patient sensor that may receive three-dimensional (3D) data associated with a patient and an imaging system that has an imaging hardware component that may acquire image data of an anatomical feature associated with the patient… The medical guidance system may also have a processor that generates a 3D surface map associated with the patient based on the 3D data, generates a 3D patient space from the 3D surface map associated with the patient, generates a 3D patient model by mapping an anatomical atlas to the 3D patient space… The 3D patient model may have one or more 3D representations of anatomical features of a human body within the 3D patient space; Par. [0020-32]: guidance system provided herein provide guidance to an operator via a three-dimensional (3D) patient model. For example, the 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient to the operator. The guidance system may generate the 3D patient model by generating a 3D surface map of the patient, identifying reference points (e.g., anatomical landmarks) based on the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient… the anatomical and/or physiological information associated with patient 22 may include the degrees of freedom associated with the desired anatomical feature to be imaged or any surrounding and/or adjacent anatomical features of the patient 22… the anatomical information and/or physiological information may include one or more anatomical models. For example, the anatomical models may be associated with anatomical features such a body part, an organ, a muscle, a bone, or the like. The anatomical models may include a polygonal or volumetric 3D model of the anatomical feature. The anatomical model may also be associated with an indexed list of anatomical components of the anatomical feature. The indexed list of anatomical components may include each body part, organ, muscle, bone, or the like, that is connected with each other body part, organ, muscle, bone, or the like, in the associated anatomical feature. Each anatomical component in the indexed list may share at least one point of correspondence to another anatomical component in the indexed list. For example, with respect to the anatomical feature of the hip-to-femur joint, the anatomical components may include the last lumbar vertebrae (L5), the sacrum (S1), the ilium, the ischium, and the femur. As such, each anatomical model may define the linkages between each of the anatomical components associated with each anatomical model. For example, in the 3D model of the anatomical feature, a point of correspondence for the femur ‘A’ and the point of correspondence for the ischium ‘B’; Par. [0037-39]: controller 24 may generate a three-dimensional (3D) patient model (e.g., an anatomical twin) associated with the patient 22 and provide visual guidance to the operator to position and/or orient the patient 22, the imaging hardware components, or both, via the 3D patient model. For example, after generating the 3D patient model, the controller 24 may send a command signal to the display 30 to present the 3D patient model associated with the patient 22 to the operator. 
The 3D patient model may visually present the expected position and/or orientation of anatomical features of the patient 22 to the operator… The controller 24 may generate the 3D patient model by generating a 3D surface map of the patient 22… identifying reference points (e.g., anatomical landmarks) within the 3D surface map, and deforming an anatomical atlas to the patient space defined by the 3D surface map of the patient 22… based on the acquired sensor data of the patient 22… the controller 24 may estimate the pose (e.g., the position and/or orientation) of the patient 22 and identify one or more anatomical reference points. For example, the anatomical reference points may include the shoulders, the hips, the knees, or any other suitable anatomical landmark… the anatomical reference points may be inferred based on the 3D surface map of the patient. The controller 24 may then fuse the anatomical reference points with the acquired 3D surface map of the patient 22… Based on 3D surface map of the patient 22, the controller 24 may identify or extract 3D anatomical reference points. The controller 24 may then deform one or more anatomical features from an anatomical atlas to the 3D surface map of the patient 22 based on the extracted 3D anatomical reference points to generate the 3D patient model… the guidance system 10 may provide the operator with spatial awareness of expected anatomical features via the 3D patient model; building (e.g. system generates (i.e. builds, creates, etc.) a three-dimensional (3D) patient model (i.e. a schematic body model of the patient) by generating a 3D surface map of the patient, identifying reference points, such as anatomical landmarks, based on the 3D surface map, and deforming an anatomical atlas to the patient space (i.e. the patient data) defined by the 3D surface map of the patient (i.e. building the schematic body model of the patient), as indicated above), for example, including a programmed controller which generates a 3D patient model (e.g., an anatomical twin) associated with the patient and provides visual guidance to an operator to position and/or orient the patient, as indicated above), for example).
The same motivation to combine the above-mentioned teachings applies, as previously indicated with respect to claim 1.
Conclusion
The prior art made of record, cited in the attached PTO-892, and not relied upon is considered pertinent to applicant’s disclosure.
In particular, US 2018/0116518 A1 teaches “a body model can be matched to the examination object… localization of the organ to be examined involves a matching of the body model, which represents information about the localization of the organ to be examined in the body model, to the examination object… the body model can be divided into a number of body segments, so there can be an individual matching of the number of body segments to the examination object”, for example.
US 2017/0124771 A1 also teaches a “medical imaging system configured to link acquired images to markers or tags on an anatomical illustration, based, at least in part, on spatial and anatomical data associated with the acquired image. The medical imaging system may be further configured to generate a diagnostic report including the anatomical illustration containing the markers”, “an anatomical model representing a patient's anatomy including, e.g., various body parts, such as tissues, organs”, and “2D or 3D images are coupled to the anatomical information processor 34. The anatomical information processor 34 operates as described below to encode anatomical location information from the images acquired with the ultrasound system. The anatomical information processor 34 may receive input from the user control panel 38, such as the type of exam performed and which standard view is being acquired. Output data from the anatomical information processor 34 is coupled to a graphics processor 36 for the reproduction of output data from the processor with the image on the display 40”, for example.
US 2020/0126648 A1 further teaches “at least one user input device and a radiological examination data storage device operatively connected. In the radiological observation method, at least a portion of the at least one radiology image is displayed in the image window on the at least one display, and at least a portion of the radiological report is displayed in the report window on the at least one display… using a group of image labels for identification of anatomical features in the at least one radiology image and a group of report labels for electronic identification of clinical medical ontology conceptual segments of the radiological report, at least one of the following operations is performed: (1) via the at least one user input device receiving a selection of an anatomical feature shown in the image window, identifying at least one related segment of the radiological report, and highlighting the at least one related segment of the radiological report in the report window; and (2) via the at least one user input device receiving a selection of a fragment of the radiological report shown in the report window, identifying at least one relevant anatomical feature in the at least one radiology image, and highlighting the at least one relevant anatomical feature in the at least one radiology image in the image window”, for example.
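For illustration only, and not as the implementation of any of the documents cited above, the image-to-illustration linking described in US 2017/0124771 A1 can be sketched as keying each acquired image to a marker position on a schematic anatomical illustration by anatomical label. The data structures, names, and marker coordinates below are assumptions added purely for illustration:

```python
# Illustrative sketch only; not the implementation of any cited reference.
# Assumed approach: each acquired image carries an anatomical label; a lookup
# table maps labels to marker coordinates on a schematic anatomical
# illustration, and a diagnostic report collects the resulting markers.
from dataclasses import dataclass, field

# Hypothetical marker positions (x, y) on a 2D anatomical illustration.
ILLUSTRATION_MARKERS = {
    "liver":  (120, 210),
    "kidney": (150, 240),
    "heart":  (110, 130),
}

@dataclass
class AcquiredImage:
    image_id: str
    anatomical_label: str          # e.g., inferred from the exam type / standard view

@dataclass
class DiagnosticReport:
    markers: list = field(default_factory=list)

    def link_image(self, image: AcquiredImage) -> None:
        """Attach a marker on the illustration for the image's anatomy."""
        position = ILLUSTRATION_MARKERS.get(image.anatomical_label)
        if position is not None:
            self.markers.append((image.image_id, image.anatomical_label, position))

# Hypothetical usage: two acquired images are linked to illustration markers.
report = DiagnosticReport()
report.link_image(AcquiredImage("img-001", "liver"))
report.link_image(AcquiredImage("img-002", "heart"))
print(report.markers)
```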
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GUILLERMO M RIVERA-MARTINEZ, whose telephone number is (571) 272-4979. The examiner can normally be reached from 9 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GUILLERMO M RIVERA-MARTINEZ/ Primary Examiner, Art Unit 2677