DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is responsive to Applicant’s remarks received on January 05, 2026. Claims 1-7, 9-16 and 18-27 are pending.
Claim Interpretation
The previous conjunctive interpretation is withdrawn in light of Applicant’s amendment.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “imaging acquisition subsystem configured to acquire” and “display subsystem configured to display” in claims 22 and 23.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
The previous claim objections have been withdrawn in light of Applicant’s amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-16, and 18-27 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Min et al. (US 2021/0217165) and Isgum et al. (US 2019/0318476).
Regarding claim 1, Min et al. discloses a method for assessing obstruction of a vessel of interest of a patient, comprising:
obtaining a volumetric image dataset for the vessel of interest (“In some embodiments, at block 104, the medical facility then obtains one or more medical images of the subject. For example, the medical image can be of the coronary region of the subject or patient” at paragraph 0124, line 1; “In some embodiments, the medical image comprises one or more of a contrast-enhanced CT image, non-contrast CT image, MR image, and/or an image obtained using any of the modalities described above” at paragraph 0137, last sentence; imaging studies generally contain a series of images that comprise a volumetric dataset for the patient);
analyzing the volumetric image dataset to extract data representing axial trajectory of the vessel of interest (“Features of embodiments of the system can include, for example, centerline and lumen/vessel extraction, plaque composition overlay, user identification of stenosis, vessel statistics calculated in real time, including vessel length, lesion length, vessel volume, lumen volume, plaque volume (non-calcified, calcified, low-density-non-calcified plaque and total), maximum remodeling index, and area/diameter stenosis (e.g., a percentage), two dimensional (2D) visualization of multi-planar reformatted vessel and cross-sectional views, interactive three dimensional (3D) rendered coronary artery tree, visualization of a cartoon artery tree that corresponds to actual vessels that appear in the CT images, semi-automatic vessel segmentation that is user modifiable, and user identification of stents and Chronic Total Occlusion (CTO)” at paragraph 0314; the centerline is the trajectory);
generating a multi-planar reformatted image based on the volumetric image dataset (“In general, arteries vessels are curvilinear in nature. Accordingly, the system can be configured to straighten out such curvilinear artery vessels into a substantially straight-line view of the artery, and in some embodiments, the foregoing is referred to as a straight multiplanar reformation (MPR) view” at paragraph 0300, line 1);
supplying the image as input to a first machine learning network that outputs feature data that characterizes a plurality of features of the vessel of interest along the axial trajectory of the vessel of interest given the image (“In some embodiments, the vessel identification algorithm, coronary artery identification algorithm, and/or plaque identification algorithm comprises an AI and/or ML algorithm” at paragraph 0127, line 10);
generating additional data that characterizes at least one additional feature of the vessel of interest along the axial trajectory of the vessel of interest by analysis separate and distinct from the first machine learning network (“In some embodiments, at block 112, the system can be further configured to analyze the identified vessels, coronary arteries, and/or plaque, for example using an AI and/or ML algorithm. In particular, in some embodiments, the system can be configured to determine one or more vascular morphology parameters, such as for example arterial remodeling, curvature, volume, width, diameter, length, and/or the like. In some embodiments, the system can be configured to determine one or more plaque parameters, such as for example volume, surface area, geometry, radiodensity, ratio or function of volume to surface area, heterogeneity index, and/or the like of one or more regions of plaque shown within the medical image” at paragraph 0129, line 1); and
supplying the data output by the first machine learning network and the additional data as input data to a second machine learning network that outputs data that characterizes anatomical lesion severity of the vessel of interest given the input data (“In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to identify and/or analyze vessels or plaque, derive one or more quantification metrics and/or classifications, and/or generate a treatment plan” at paragraph 0130, last sentence).
Min et al. does not explicitly disclose that the MPR image is based on data representing the axial trajectory of the vessel of interest; supplying the MPR image as input to the first machine learning network; that the data output by the second machine learning network includes a plurality of fractional flow reserve (FFR) values for centerline points along the vessel of interest; or that the second machine learning network is trained by supervised learning using training data based on a plurality of FFR values associated with vessel centerline points for a plurality of patients.
Isgum et al. teaches a method for assessing obstruction of a vessel of interest of a patient, comprising:
obtaining a volumetric image dataset for the vessel of interest (“As described in step 201 of FIG. 2, an image dataset is obtained. Such an image dataset represents a volumetric image dataset for instance a single contrast enhanced CCTA dataset” at paragraph 0073, line 1);
analyzing the volumetric image dataset to extract data representing axial trajectory of the vessel of interest (“Within step 202 of FIG. 2, the processors extract an axial trajectory extending along the vessel of interest. For example, the axial trajectory may correspond to a centerline extending along the vessel of interest” at paragraph 0074, line 1);
generating a multi-planar reformatted image based on the volumetric image dataset and the data representing axial trajectory of the vessel of interest (“As described further by step 203, the centerline is used to create the MPR image” at paragraph 0074, line 13);
supplying the MPR image as input to a first machine learning network that outputs feature data that characterizes a plurality of features of the vessel of interest along the axial trajectory of the vessel of interest given the MPR image (“After training the machine learning model, step 204 of FIG. 2 is configured to predict the coronary plaque type, and/or anatomical stenosis severity and/or the functional significance of the coronary of interest based on analysis of the MPR image as a result of step 204” at paragraph 0078, line 1);
generating additional data that characterizes at least one additional feature of the vessel of interest along the axial trajectory of the vessel of interest by analysis separate and distinct from the first machine learning network (“In case bifurcation and/or the coronary tree is analyzed, multiple centerline are extracted, as for example two coronary centerlines are extracted when analyzing one bifurcation; one coronary centerline identified by a proximal location to a distal location within the main branch of bifurcation, and one centerline identified by a proximal location to a distal location within the side branch of bifurcation” at paragraph 0074, line 11; a side branch is characterized separately from the main centerline branch),
wherein the data output by the second machine learning network includes a plurality of fractional flow reserve (FFR) values for centerline points along the vessel of interest (“Within step 1810 of FIG. 18 the processors apply a supervised classifier to train an FFR classifier” at paragraph 0129, line 1; “Finally step 2308 provides the output and present the results and is identical to step 206 of FIG. 2” at paragraph 0141; “Image 504 of FIG. 5a(iv) shows the FFR value along the MPR image, in which the y-axis represents the estimated FFR value and the x-axis the location along the length of the coronary of interest which corresponds to the x-axis of the MPR image 501” at paragraph 0079, line 20), and the second machine learning network is trained by supervised learning using training data based on a plurality of FFR values associated with vessel centerline points for a plurality of patients (“The reference standard is a database from multiple patients (step 1801). Each set within this database contains a) contrast enhanced CT datasets (step 1803) with belonging b) reference value (step 1802). In a preferred embodiment, the reference value indicative for functional significance coronary lesion (e.g. FFR) 1802, representing a fluid-dynamic parameter, is an (pullback) invasive fractional flow reserve (FFR) measurement as performed during X-ray angiography which belongs to the contrast enhanced CT dataset 1803” at paragraph 0129, line 29).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the MPR generation and analysis taught by Isgum et al. to generate the vessel-of-interest information of Min et al., in order to enable the system to estimate FFR values along the centerline, which is further indicative of disease severity and patient risk (Isgum et al. at paragraph 0079).
Regarding claim 2, the Min et al. and Isgum et al. combination discloses a method further comprising:
displaying or outputting the data that characterizes anatomical lesion severity of the vessel of interest (“FIG. 9H illustrates a panel showing categories of the one or more stenosis marked on the SMPR based on the analysis. Color can be used to enhance the displayed information. In an example, stenosis in the LM>=50% diameter stenosis are marked in red. As illustrated in a panel 907 of the user interface in FIG. 9I, for each segment's greatest percentage diameter stenosis the minimum luminal diameter and lumen diameter at the reference can be displayed when a pointing device is “hovered” above the graphical vessel cross-section representation, as illustrated in FIG. 9J” Min et al. at paragraph 0381, line 1; “The output (step 206 of FIG. 2) is a prediction of the coronary plaque type, and/or anatomical stenosis severity and/or the functional significance of lesion(s) within the coronary of interest. This result can be presented to the user in various ways. FIGS. 5a(i)-5a(v) and FIG. 5b shows some examples of the presentation of the results to the user. Image 501 of FIG. 5a(i) represents the MPR image as a result of step 203 of FIG. 2. Image 502 of FIG. 5a(ii) shows the plaque type classification as a color or grey value superimposed on the MPR image, in which the colors represent different plaque types (e.g. no-plaque, calcified plaque, non-calcified plaque or mixed plaque). Image 503 of FIG. 5a(iii) shows the anatomical stenosis severity superimposed as a color or grey value on the MPR image, in which the colors represent different labels of anatomical stenosis severity. Within Image 503, three anatomical stenosis severity classes are visualized, no-stenosis, non-anatomical significant stenosis (with <50% luminal narrowing) or anatomical significant stenosis (with ≥50% luminal narrowing)” Isgum et al. at paragraph 0079, line 1).
Regarding claim 3, the Min et al. and Isgum et al. combination discloses a method wherein:
the additional data is generated from analysis of the MPR image; and/or
the additional data is generated from analysis of the volumetric image dataset (“In some embodiments, at block 112, the system can be further configured to analyze the identified vessels, coronary arteries, and/or plaque, for example using an AI and/or ML algorithm. In particular, in some embodiments, the system can be configured to determine one or more vascular morphology parameters, such as for example arterial remodeling, curvature, volume, width, diameter, length, and/or the like. In some embodiments, the system can be configured to determine one or more plaque parameters, such as for example volume, surface area, geometry, radiodensity, ratio or function of volume to surface area, heterogeneity index, and/or the like of one or more regions of plaque shown within the medical image” Min et al. at paragraph 0129, line 1); and/or
the additional data is generated from a coronary artery centerline tree derived from the volumetric image dataset.
Regarding claim 4, the Min et al. and Isgum et al. combination discloses a method wherein:
the additional data characterizes at least one of i) side branches along the axial trajectory of the vessel of interest or ii) bifurcations along the axial trajectory of the vessel of interest (“In case bifurcation and/or the coronary tree is analyzed, multiple centerline are extracted, as for example two coronary centerlines are extracted when analyzing one bifurcation; one coronary centerline identified by a proximal location to a distal location within the main branch of bifurcation, and one centerline identified by a proximal location to a distal location within the side branch of bifurcation” Isgum et al. at paragraph 0074, line 11).
Regarding claim 5, the Min et al. and Isgum et al. combination discloses a method wherein:
the additional data characterizes at least one of soft plaque area, mixed plaque area, or other characteristic feature along the axial trajectory of the vessel of interest (“In some embodiments, as part of block 208, the system can be configured to determine a radiodensity of plaque and/or a composition thereof at block 207. For example, a high radiodensity value can indicate that a plaque is highly calcified or stable, whereas a low radiodensity value can indicate that a plaque is less calcified or unstable. As such, in some embodiments, the system can be configured to determine that a radiodensity of a region of plaque above a predetermined threshold is indicative of stable stabilized plaque. In addition, different areas within a region of plaque can be calcified at different levels and thereby show different radiodensity values” Min et al. at paragraph 0144, line 1).
Regarding claim 6, the Min et al. and Isgum et al. combination discloses a method wherein:
the additional data further characterizes a localized part of the myocardium that is associated with the vessel of interest (“Stenosis and atherosclerosis data displayed on the user interface in panel 807 will update accordingly as various segments are selected, as illustrated in FIG. 8D. FIG. 8E illustrates an example of a portion of the per-territory summary panel 807 of the user interface. FIG. 8F also illustrates an example of portion of panel 807 showing the SMPR of a selected vessel and its associated statistics along the vessel at indicated locations (e.g., at locations indicated by a pointing device as it is moved along the SMPR visualization). That is, the user interface 600 is configured to provide plaque details and stenosis details in an SMPR visualization in panel 809 and a pop-up panel 810 that displays information as the user interface receives location information long the displayed vessel from the user, e.g., via a pointing device” Min et al. at paragraph 0376, line 1).
Regarding claim 7, the Min et al. and Isgum et al. combination discloses a method wherein:
the data output by the second machine learning network includes a fractional flow reserve value for the entire vessel of interest (“Within step 1810 of FIG. 18 the processors apply a supervised classifier to train an FFR classifier” Isgum et al. at paragraph 0129, line 1); and
the second machine learning network is trained by supervised learning using training data that includes a reference annotation based on measurement of FFR for a plurality of patients (“The reference standard is a database from multiple patients (step 1801). Each set within this database contains a) contrast enhanced CT datasets (step 1803) with belonging b) reference value (step 1802). In a preferred embodiment, the reference value indicative for functional significance coronary lesion (e.g. FFR) 1802, representing a fluid-dynamic parameter, is an (pullback) invasive fractional flow reserve (FFR) measurement as performed during X-ray angiography which belongs to the contrast enhanced CT dataset 1803” Isgum et al. at paragraph 0129, line 29).
Regarding claim 9, the Min et al. and Isgum et al. combination discloses a method wherein:
the data output by the second machine learning network further represents a prediction for the presence of a functionally significant stenosis (“FIG. 9H illustrates a panel showing categories of the one or more stenosis marked on the SMPR based on the analysis. Color can be used to enhance the displayed information. In an example, stenosis in the LM>=50% diameter stenosis are marked in red. As illustrated in a panel 907 of the user interface in FIG. 9I, for each segment's greatest percentage diameter stenosis the minimum luminal diameter and lumen diameter at the reference can be displayed when a pointing device is “hovered” above the graphical vessel cross-section representation, as illustrated in FIG. 9J” Min et al. at paragraph 0381, line 1; a large stenosis as designated by the red color is therefore a functionally significant stenosis); and
the second machine learning network is trained by supervised learning using training data that includes reference annotations representing presence of a functionally significant stenosis for a plurality of patients (“For example, in some embodiments, the vessel identification algorithm, coronary artery identification algorithm, and/or plaque identification algorithm can be trained on a plurality of medical images wherein one or more vessels, coronary arteries, and/or regions of plaque are pre-identified.” Min et al. at paragraph 0127, second to last sentence).
Regarding claim 10, the Min et al. and Isgum et al. combination discloses a method wherein:
the plurality of the features characterized by the feature data output by the first machine learning network includes at least one feature related to lumen characteristics of the vessel of interest along the axial trajectory of the vessel of interest (“In some embodiments, the system can be configured to identify a vessel wall and a lumen wall for each of the identified coronary arteries in the medical image. In some embodiments, the system is then configured to determine the volume in between the vessel wall and the lumen wall as plaque” Min et al. at paragraph 0139, line 12).
Regarding claim 11, the Min et al. and Isgum et al. combination discloses a method wherein:
the plurality of the features characterized by the feature data output by the first machine learning network includes at least one feature related to plaque characteristics of the vessel of interest along the axial trajectory of the vessel of interest (“In some embodiments, the system can be configured to identify regions of plaque based on the radiodensity values typically associated with plaque, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with plaque with or without normalizing using a normalization device” Min et al. at paragraph 0139, last sentence).
Regarding claim 12, the Min et al. and Isgum et al. combination discloses a method wherein:
the first machine learning network comprises a convolutional neural network, which is trained using training data that includes reference annotations for the plurality of the features characterized by the feature data output by the first machine learning network (“For example, in some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which arteries or coronary arteries have been identified, thereby allowing the AI and/or ML algorithm automatically identify arteries or coronary arteries directly from a medical image. In some embodiments, the arteries or coronary arteries are identified by size and/or location” Min et al. at paragraph 0138, second to last sentence; “The memory 118 may store memory configured to store a volumetric image dataset for a target organ that includes a vessel of interest. Optionally, the memory 118 may store a training database that includes volumetric imaging datasets for multiple patients and corresponding coronary artery disease (CAD) related reference values, the volumetric image data sets being for a target organ that includes a vessel of interest, the CAD related reference values corresponding to one or more points along a vessel of interest within the corresponding imaging data set” Isgum et al. at paragraph 0067).
Regarding claim 13, the Min et al. and Isgum et al. combination discloses a method wherein:
the reference annotations are derived by manual segmentation of corresponding volumetric image data and/or automatic segmentation of corresponding volumetric image data (“For example, in some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which arteries or coronary arteries have been identified, thereby allowing the AI and/or ML algorithm automatically identify arteries or coronary arteries directly from a medical image. In some embodiments, the arteries or coronary arteries are identified by size and/or location” Min et al. at paragraph 0138, second to last sentence).
Regarding claim 14, the Min et al. and Isgum et al. combination discloses a method wherein:
the second machine learning network comprises a convolutional neural network, which is trained using training data that includes volumetric image data and corresponding reference annotations for the output data that characterizes anatomical lesion severity of the vessel of interest (“In some embodiments, the system is configured to utilize an AI, ML, and/or other algorithm to characterize the change in calcium score based on one or more plaque parameters derived from a medical image. For example, in some embodiments, the system can be configured to utilize an AI and/or ML algorithm that is trained using a CNN and/or using a dataset of known medical images with identified plaque parameters combined with calcium scores. In some embodiments, the system can be configured to characterize a change in calcium score by accessing known datasets of the same stored in a database. For example, the known dataset may include datasets of changes in calcium scores and/or medical images and/or plaque parameters derived therefrom of other subjects in the past. In some embodiments, the system can be configured to characterize a change in calcium score and/or determine a cause thereof on a vessel-by-vessel basis, segment-by-segment basis, plaque-by-plaque basis, and/or a subject basis” Min et al. at paragraph 0250); and
the convolutional neural network of the second machine learning system includes an accumulator that outputs fractional flow reserve (FFR) values for centerline points along the vessel of interest (“Various other ways of visualization can be used, for instance the results can be superimposed on the curved MPR, on the orthogonal views or the results can be visualized on the volumetric rendering of the image dataset (201 of FIG. 2) as shown by image 505 of FIG. 5a(v) in which 506 illustrates a color-coded result of for instance the FFR value along the coronary centerline” Isgum et al. at paragraph 0079, line 24).
Regarding claim 15, the Min et al. and Isgum et al. combination discloses a method wherein:
the reference annotations are derived by manual segmentation of the corresponding volumetric image data and/or automatic segmentation of the corresponding volumetric image data (“For example, in some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which arteries or coronary arteries have been identified, thereby allowing the AI and/or ML algorithm automatically identify arteries or coronary arteries directly from a medical image. In some embodiments, the arteries or coronary arteries are identified by size and/or location” Min et al. at paragraph 0138, second to last sentence).
Regarding claim 16, the Min et al. and Isgum et al. combination discloses a method wherein:
the convolutional neural network of the second machine learning system includes a regression head that outputs a fractional flow reserve value for the entire vessel of interest (“Within step 1810 of FIG. 18 the processors apply a supervised classifier to train an FFR classifier” Isgum et al. at paragraph 0129, line 1).
Regarding claim 18, the Min et al. and Isgum et al. combination discloses a method wherein:
the convolutional neural network of the second machine learning system further includes a classification head that outputs data representing a prediction for the presence of a functionally significant stenosis (“In some embodiments, the system is configured to utilize an AI, ML, and/or other algorithm to characterize the change in calcium score based on one or more plaque parameters derived from a medical image. For example, in some embodiments, the system can be configured to utilize an AI and/or ML algorithm that is trained using a CNN and/or using a dataset of known medical images with identified plaque parameters combined with calcium scores. In some embodiments, the system can be configured to characterize a change in calcium score by accessing known datasets of the same stored in a database. For example, the known dataset may include datasets of changes in calcium scores and/or medical images and/or plaque parameters derived therefrom of other subjects in the past. In some embodiments, the system can be configured to characterize a change in calcium score and/or determine a cause thereof on a vessel-by-vessel basis, segment-by-segment basis, plaque-by-plaque basis, and/or a subject basis” Min et al. at paragraph 0250).
Regarding claim 19, the Min et al. and Isgum et al. combination discloses a method wherein:
the vessel of interest comprises a coronary artery or a coronary tree (“In some embodiments, at block 104, the medical facility then obtains one or more medical images of the subject. For example, the medical image can be of the coronary region of the subject or patient” Min et al. at paragraph 0124, line 1; “When the vessel of interest represents the coronary artery, the axial trajectory may correspond to the coronary centerline, in which case the processors extract the coronary centerline” Isgum et al. at paragraph 0074, line 4).
Regarding claim 20, the Min et al. and Isgum et al. combination discloses a method wherein:
the volumetric image dataset comprises CCTA image data (“In some embodiments, the system is configured as a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac CT data for determining the presence and extent of coronary plaques (i.e., atherosclerosis) and stenosis in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of coronary artery disease (CAD)” Min et al. at paragraph 0313, line 1; “As described in step 201 of FIG. 2, an image dataset is obtained. Such an image dataset represents a volumetric image dataset for instance a single contrast enhanced CCTA dataset” Isgum et al. at paragraph 0073, line 1).
Regarding claim 21, the Min et al. and Isgum et al. combination discloses a system for assessing obstruction of a vessel of interest of a patient, the system comprising:
at least one processor that (“The computer system 1402 includes one or more processing units (CPU) 1406, which may comprise a microprocessor” Min et al. at paragraph 0448, line 1), when executing program instructions stored in memory (“Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and may be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within special designed hardware or firmware” Min et al. at paragraph 0447, line 1), is configured to perform the method of claim 1 (see claim 1).
Regarding claim 22, the Min et al. and Isgum et al. combination discloses a system further comprising:
an imaging acquisition subsystem configured to acquire the volumetric image dataset (“In some embodiments, at block 104, the medical facility then obtains one or more medical images of the subject. For example, the medical image can be of the coronary region of the subject or patient. In some embodiments, the systems disclosed herein can be configured to take in CT data from the image domain” Min et al. at paragraph 0124, line 1; “The CT imaging apparatus 112 captures a CT scan of the organ of interest” Isgum et al. at paragraph 0062, line 1).
Regarding claim 23, the Min et al. and Isgum et al. combination discloses a system further comprising:
a display subsystem configured to display the data that characterizes anatomical lesion severity of the vessel of interest, which includes fractional flow reserve values for centerline points along the vessel of interest (“In some embodiments, the system is configured to analyze arteries present in the CT scan data and display various views of the arteries present in the patient” Min et al. at paragraph 0132, line 8; “Various other ways of visualization can be used, for instance the results can be superimposed on the curved MPR, on the orthogonal views or the results can be visualized on the volumetric rendering of the image dataset (201 of FIG. 2) as shown by image 505 of FIG. 5a(v) in which 506 illustrates a color-coded result of for instance the FFR value along the coronary centerline” Isgum et al. at paragraph 0079, line 24).
Regarding claim 24, the Min et al. and Isgum et al. combination discloses a non-transitory program storage device tangibly embodying a program of instructions that are executable on a machine (“Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and may be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within special designed hardware or firmware” Min et al. at paragraph 0447, line 1) to perform the operations of claim 1 for assessing obstruction of a vessel of interest of a patient (see claim 1).
Regarding claim 25, the Min et al. and Isgum et al. combination discloses a method wherein:
the training data used to train the second machine learning network includes reference annotations based on measurements of a plurality of FFR values associated with vessel centerline points by automatic or manual pullback during measurement of FFR along a vessel (“For example, the database contains, for each patient, a) contrast enhanced imaging (e.g. CT) datasets (step 1803) and corresponding b) invasively measured fractional flow reserve reference values (step 1802). The fractional flow reserve reference values correspond to pressure measurements that are invasively measured during a pressure wire pullback operation” Isgum et al. at paragraph 0118, line 6).
Regarding claim 26, the Min et al. and Isgum et al. combination discloses a method wherein:
the reference annotations of the training data used to train the second machine learning network are generated by aligning the measurements of the plurality of FFR values to spatial coordinates of an MPR image (“In step 1806 of FIG. 18 the processors align the FFR reference value 1802 to the spatial coordinates of the MPR image as a result of step 1805” Isgum et al. at paragraph 0121, line 1).
Regarding claim 27, the Min et al. and Isgum et al. combination discloses a method wherein:
the training data used to train the second machine learning network includes reference annotations based on a plurality of FFR values associated with vessel centerline points, wherein the plurality of FFR values are calculated from three-dimensional coronary reconstruction using x-ray angiography (“The vFFR method of CAAS Workstation generates a 3D coronary reconstruction using 2 angiographic x-ray projections with at least 30 degrees apart. vFFR is calculated instantaneously by utilizing a proprietary algorithm which incorporates the morphology of the 3D coronary reconstruction and routinely measured patient specific aortic pressure. FIG. 27 shows an example of obtaining a computed FFR pullback of the coronary circumflex by using CAAS Workstation. 2701 shows the segmentation of the coronary circumflex in each 2D X-ray angiographic image, resulting in a 3D reconstruction of the coronary artery (2702). The graph 2703 shows the computed vFFR value along the length of the 3D reconstructed coronary artery” Isgum et al. at paragraph 0150, line 1).
Response to Arguments
Summary of Remarks (at response page labeled 9): “Igsum et al. does not teach or suggest any machine learning network that outputs data that includes a plurality of fractional flow reserve (FFR) values for centerline points along the vessel of interest, let alone where the machine learning network is "trained by supervised learning using training data based on a plurality of FFR values associated with vessel centerline points for a plurality of patients" as recited in amended claim 1. Instead, the system of Igsum et al. uses machine learning to generate data that includes a fractional flow reserve (FFR) value for an entire vessel of interest (see block 1405 of FIG. 4 and paragraph [0103]). Furthermore, the machine learning system of Igsum et al. is trained using a single reference FFR value for the target vessel, which is assessed by means of a manual or automatic pullback in the distal part of the target vessel (see FIG. 18 and paragraphs [0117] and [0129]).”
Examiner’s Response: As demonstrated above, Isgum et al. classifies the input image data and outputs data in the form of an FFR graph showing an FFR value at each point along the centerline. The FFR values are obtained during a pullback operation and are aligned to the generated MPR data, together forming the training data that is input to the FFR classifier for training. This is shown in the rejections of claims 1 and 25 above, as supported by the citations to paragraphs 0079, 0118 and 0129.
Summary of Remarks (at response page labeled 11): “In still another example, dependent claim 4 has been amended to recite that "the additional data characterizes at least one of i) side branches along the axial trajectory of the vessel of interest or ii) bifurcations along the axial trajectory of the vessel of interest." Nowhere does the cited prior art teach or suggest these features. In rejecting original claim 4, the Examiner points to paragraph [0074] of Igsum et al. as suggesting this feature. This analysis is misguided. In paragraph [0074] of Igsum et al., the identification of a bifurcation is used to extract multiple vessel centerlines, which are then used to create multiple MPR images. The bifurcation information is not used as additional data input to any machine learning system as recited in dependent claim 4.”
Examiner’s Response: As Applicant points out, the bifurcation information is used in generating the MPR images. As previously noted, the MPR images, together with the FFR values extracted during pullback, form the training data that is input to the FFR classifier.
Summary of Remarks (at response page labeled 11): “In yet another example, dependent claim 14 has been amended to include features of original claim 17 (now cancelled) and recites that "the convolutional neural network of the second machine learning system includes an accumulator that outputs the plurality of fractional flow reserve (FFR) values for centerline points along the vessel of interest." Nowhere does the cited prior art teach or suggest these features. In rejecting original claim 17, the Examiner points to Fig. 5(a)(v) of Igsum et al. (which shows the visualization of FFR values along the length of an MPR image) as suggesting this feature. This analysis is misguided as it treats MPR visualization (as used in Isgum et al.) as equivalent to the structure (accumulator stage) of a convolutional neural network. This assumption is not supported by the cited prior art references and reflects impermissible hindsight bias. This flawed analysis undermines the foundation of the rejection of dependent claim 14.”
Examiner’s Response: Nothing in claim 14 defines the accumulator by any particular structure, nor does the claim detail how the accumulator outputs the FFR values. The claimed accumulator is therefore equivalent to a black box that takes in data and outputs different data, without elaboration on how the output is generated. Because the output of the classifier of Isgum et al. is the FFR graph, that output meets the requirements of claim 14.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571)270-1574. The examiner can normally be reached Monday - Friday, 9:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATRINA R FUJITA/Primary Examiner, Art Unit 2672