DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status:
Claims 1-15, 17-19 have been canceled.
Claims 16, 20-30 are pending and examined below.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/24/2024 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 16 and 24-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO2018/178272 to Gauriau et al. “Gauriau”, in view of US2021/0133977 to Yamazaki.
Regarding claims 16 and 24, Gauriau discloses a method (Abstract, method that derives at least one hemodynamic parameter), and a non-transitory computer readable storage medium (nonvolatile computer readable storage device, Page 7, lines 26-34) comprising a computer program stored thereon (storing data, some of which represent executable instructions used by the image processing system 100 in executing the modules of the image processing system, the method described, and the data flow process, Page 7, last paragraph, to the top of Page 8) wherein, when executed, the computer program executes the method, comprising:
receiving a first time series of diagnostic images of blood flow through a vasculature (sequence of images 104 with a stream of images 102, Page 10, Paragraph starting at line 20; wherein the images are interpreted as the 2D imaging data 42 generated by image acquisition machine 30, Page 6, Paragraph starting at line 17);
generating, using a subset of the time series of diagnostic images (the group of images that includes contrast agent, that are grouped as a temporal cluster, Page 10, last paragraph, to the top of Page 11), a quantitative fluid dynamics parameter that is descriptive of the blood flow through the vasculature (See Fig. 4, wherein at least one hemodynamic parameter is obtained, step 214, which is derived from a simulation, step 212, based on a combined model, step 210, that includes the 2D imaging data found in step 204, which as disclosed in Page 6, Paragraph starting at line 17, in an embodiment is a sequence of images acquired during a contrast injection),
wherein the subset depicts a contrast agent progression through the vasculature (the images are of the region of interest of vasculature, therefore the contrast agent must be through the vasculature, Page 2, lines 23-25), wherein the quantitative fluid dynamics parameter is generated (See Fig. 4, wherein at least one hemodynamic parameter is obtained, step 214, which is derived from a simulation, step 212, based on a combined model, step 210, that includes the 2D imaging data found in step 204, which as disclosed in Page 6, Paragraph starting at line 17, in an embodiment is a sequence of images acquired during a contrast injection) based on tracking the contrast agent progression in the subset (the group of images that includes contrast agent, that are grouped as a temporal cluster, Page 10, last paragraph, to the top of Page 11; the temporal nature of the cluster would read on a progression of the contrast agent in the subset); and
outputting, to a display, a graphical representation of the quantitative fluid dynamics parameter (The display device 26 is any monitor, screen, or the like suitable for presenting a graphical user interface (GUI) capable of presenting an enhanced 3D model and results of a hemodynamic simulation as described herein, Page 6, lines 25-27) to provide an assessment of the blood flow through the vasculature to a user (Page 10, Paragraph starting at line 7, determining hemodynamic values such as pressure ratios across a stenosis for classification of vessel disease).
Gauriau further discloses that for determining the subset of images that includes the contrast agent, a score vector can be used, or a deep-learning neural network, to identify the contrast enhanced images, which equates to the group of images that includes contrast agent (Page 11, Paragraph starting at line 7).
However, Gauriau does not disclose normalizing the first time series of diagnostic images to generate a second time series of diagnostic images, wherein the normalizing is of the image resolution.
Yamazaki teaches a similar method of using a trained classifier for a determination result from input medical images (Abstract, more specifically using a trained classifier to determine a target region of the image, and classify the image). Yamazaki teaches the input image is subjected to arbitrary preprocessing such as changes in the image size and resolution normalization, wherein the arbitrary preprocessing is of similar value as that used for training the classifier (Paragraph 0061).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, wherein the normalization is normalizing the image resolution, as taught by Yamazaki, in order to have similar values for the imaging attributes, such as image resolution, for the input images as those used for training, to make it easier to acquire a result of the machine-learning algorithm with high accuracy (Yamazaki, Paragraph 0061).
Therefore, in the combination of Gauriau and Yamazaki, preprocessing the images would be preprocessing the stream of images 102 of Gauriau (Page 10, Paragraph starting at line 20), wherein the preprocessing is normalization of the image resolution to values similar to those used for training the machine learning algorithm (as taught by Yamazaki, Paragraph 0061). Therefore, in normalizing the stream of images 102 of Gauriau, the images before normalization, which read on the first time series of diagnostic images, would have a first value of an image resolution, since they were acquired in a sequence or stream, implying common acquisition parameters; after normalization, they would read on a second time series of diagnostic images, with a second value of the image resolution, similar to that used for training.
Further, since Gauriau uses as the input for the algorithm the group of images that includes contrast agent, grouped as a temporal cluster (Page 10, last paragraph, to the top of Page 11), referenced as the sequence of images 104 (Page 10, Paragraph starting at line 20 and Page 10, last paragraph, to the top of Page 11), the images 104 would read on the subset of the second time series of diagnostic images, with the second value of the image resolution. Yamazaki teaches that the image to be input into the classification unit is subjected to the arbitrary preprocessing (i.e., the normalization of image resolution), which reads on all images input into the classifier undergoing image resolution normalization; thus, the images of the time series of Gauriau would all undergo image resolution normalization to create the second time series of diagnostic images, with the second value of the image resolution.
Regarding claim 25, the modifications of Gauriau and Yamazaki disclose all the features of claim 16 above.
As disclosed in the claim 16 rejection above, in the combination of Gauriau and Yamazaki, preprocessing the images would be preprocessing the stream of images 102 of Gauriau (Page 10, Paragraph starting at line 20), wherein the preprocessing is normalization of the image resolution to values similar to those used for training the machine learning algorithm (as taught by Yamazaki, Paragraph 0061). Therefore, in normalizing the stream of images 102 of Gauriau, the images before normalization, which read on the first time series of diagnostic images, would have a first value of an image resolution, since they were acquired in a sequence or stream, implying common acquisition parameters; after normalization, they would read on a second time series of diagnostic images, with a second value of the image resolution, similar to that used for training.
Further, since Gauriau uses as the input for the algorithm the group of images that includes contrast agent, grouped as a temporal cluster (Page 10, last paragraph, to the top of Page 11), referenced as the sequence of images 104 (Page 10, Paragraph starting at line 20 and Page 10, last paragraph, to the top of Page 11), the images 104 would read on the subset of the second time series of diagnostic images, with the second value of the image resolution, since what is used as input, as taught by Yamazaki, must be preprocessed using normalization of the image resolution to a value similar to that used for training.
Regarding claim 26, the modifications of Gauriau and Yamazaki disclose all the features of claim 25 above.
Gauriau further discloses a contrast agent detector that detects, from the stream of images 102, the group of images that include contrast agent, referenced as images 104 (Page 10, Paragraph starting at line 20 and Page 10, last paragraph, to the top of Page 11), and as discussed above in the claim 25 rejection, the combination of Gauriau and Yamazaki teaches that preprocessing occurs on the stream of images 102. Therefore, grouping of the images (which reads on the subset of images) would occur after the image resolution is normalized for the stream of images 102.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gauriau, in view of Yamazaki, as applied to claim 16 above, and further in view of US2012/0230558 to Chen et al. “Chen”.
Regarding claim 20, the modifications of Gauriau and Yamazaki disclose all the features of claim 16 above.
As disclosed in the claim 16 rejection above, Gauriau discloses that the subset of images are the group of images that includes contrast agent, that are grouped as a temporal cluster (Page 10 last paragraph to the top of Page 11), wherein as discussed in the claim 16 rejection above, would read on the subset of the second time series of diagnostic images.
However, the modifications of Gauriau and Yamazaki do not disclose identifying the subset of the second time series of diagnostic images, wherein the subset comprises an image indicative of an initial injection of contrast agent into the vasculature.
Chen teaches using a trained algorithm (trained contrast inflow detector, Paragraph 0027) to detect when a contrast agent injection is present in the fluoroscopic image sequence; the method then detects at which frame in the sequence the contrast begins to be present (i.e., at which frame the contrast inflow begins) (Paragraph 0027). This reads on identifying the subset of the second time series of diagnostic images, wherein the subset comprises an image indicative of an initial injection of contrast agent into the vasculature.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, and Yamazaki, wherein the method includes identifying, in the second time series of diagnostic images, the subset of the second time series of diagnostic images, wherein the subset comprises an image indicative of an initial injection of contrast agent into the vasculature, as taught by Chen, in order to provide an automatic inflow detection method for computer-aided procedures (Chen, Paragraph 0004).
Claim(s) 21, 22, 27, 29, and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gauriau, in view of Yamazaki, as applied to claim 16 above, and further in view of US2016/0148371 to Itu et al. “Itu”.
Regarding claim 21, the modifications of Gauriau and Yamazaki disclose all the features of claim 16 above.
As disclosed in the claim 16 rejection above, Gauriau discloses generating, using a subset of the time series of diagnostic images (the group of images that includes contrast agent, that are grouped as a temporal cluster, Page 10, last paragraph, to the top of Page 11), as input to determine a quantitative fluid dynamics parameter that is descriptive of the blood flow through the vasculature (See Fig. 4, wherein at least one hemodynamic parameter is obtained, step 214, which is derived from a simulation, step 212, based on a combined model, step 210, that includes the 2D imaging data found in step 204, which as disclosed in Page 6, Paragraph starting at line 17, in an embodiment is a sequence of images acquired during a contrast injection), wherein as further discussed in the claim 16 rejection above, the subset would read on the subset of the second time series of diagnostic images.
However, the modifications of Gauriau and Yamazaki do not disclose: wherein in the generating of the quantitative fluid dynamics parameter, the input is placed into a neural network; generating, by operation of the neural network, an output that the neural network is trained to generate, wherein the output comprises the quantitative fluid dynamics parameter.
Itu teaches inputting diagnostic images to a neural network (acquiring medical scan data representing a vessel structure of a patient, which is processed with a feature extraction that is input into a machine-trained classifier, Paragraph 0006; wherein the machine-trained classifier can be neural networks, Paragraph 0147); and
generating, by operation of the neural network, an output that the neural network is trained to generate (application of the machine-trained classifier results in an output of hemodynamic metric, Paragraph 0006; wherein the machine trained-classifier can be neural networks, Paragraph 0147),
wherein the output comprises the quantitative fluid dynamics parameter (Paragraph 0006, output a hemodynamic metric; wherein examples of the hemodynamic metric are indices such as fractional flow reserve, coronary flow reserve, and instantaneous wave free ratio, Paragraph 0060).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau and Yamazaki, wherein in the generating of the quantitative fluid dynamics parameter, the input is placed into a neural network; generating, by operation of the neural network, an output that the neural network is trained to generate, wherein the output comprises the quantitative fluid dynamics parameter, as taught by Itu, in order to more rapidly predict the hemodynamic metric value, in comparison to computational flow dynamics, which can allow for predicting during a surgical procedure, during therapy planning, or during diagnosis by a medical professional (Itu, Paragraphs 0202-0203).
Regarding claim 22, the modifications of Gauriau, Yamazaki, and Itu disclose all the features of claim 21 above.
Itu teaches wherein the neural network (the machine learning algorithm can be artificial neural networks, Paragraph 0147) is trained with a ground truth quantitative fluid dynamics parameter (patient-specific flow is measured and used as a ground truth, Paragraph 0186; hemodynamic ground truths are used to train the classifier, Paragraph 0193) and a virtual time series of diagnostic images indicative of a contrast agent dynamic through vasculature (training data are determined from simulation, synthetically created images, Paragraph 0101; wherein the synthetic images include contrast agent propagation, Paragraph 0210, which would read on temporal images of contrast agent dynamics).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, Yamazaki, and Itu, wherein the neural network is trained with a ground truth quantitative fluid dynamics parameter and a virtual time series of diagnostic images indicative of a contrast agent dynamic through vasculature, as taught by Itu, in order to generate a wide range of examples to train the learning algorithm, that are not readily available from patient-measured examples (Itu, Paragraphs 0004, 0005).
Regarding claim 27, the modifications of Gauriau, Yamazaki, and Itu disclose all the features of claim 21 above.
As disclosed in the claim 21 rejection above, Itu teaches wherein the neural network comprises a classifier (acquiring medical scan data representing a vessel structure of a patient, which is processed with a feature extraction that is input into a machine trained-classifier, Paragraph 0006; wherein the machine trained classifier can be neural networks, Paragraph 0147; this reads on the neural network comprises a classifier).
Regarding claims 29 and 30, the modifications of Gauriau, Yamazaki, and Itu disclose all the features of claim 21 above.
As disclosed in the claim 21 rejection above, Itu teaches wherein the quantitative fluid dynamics parameter comprises a hemodynamic index, and comprises at least one of coronary flow reserve (CFR) or fractional flow reserve (FFR). (Paragraph 0006, output a hemodynamic metric; wherein examples of the hemodynamic metric are indices such as fractional flow reserve, coronary flow reserve, and instantaneous wave free ratio, Paragraph 0060; this reads on the output as the claimed hemodynamic index such as fractional flow reserve or coronary flow reserve).
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gauriau, in view of Yamazaki, and further in view of Itu, as applied to claim 22 above, and further in view of US2020/0085394 to Turcea et al. “Turcea”.
Regarding claim 23, the modifications of Gauriau, Yamazaki, and Itu disclose all the features of claim 22 above.
Itu teaches wherein the training images are generated by: defining at least one virtual vessel tree (generating synthetic arterial trees as training data, Paragraphs 0060, 0066); and modeling a flow speed through the at least one vessel tree based on a fluid dynamics model (In act 16, computational fluid dynamics (CFD) computations for the in silico anatomical models or flow experiments for the in vitro anatomical models are performed to determine a ground truth or value of the hemodynamic metric for each example, Paragraph 0060).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, Yamazaki, and Itu, wherein the method includes defining at least one virtual vessel tree, and modeling a flow speed through the at least one vessel tree based on a fluid dynamics model, as taught by Itu, in order to extract measures of interest, such as diagnostic indices including fractional flow reserve and coronary flow reserve, to then further train the model (See Fig. 2, Refs. 10, 16, 24, and 14).
However, Itu does not explicitly teach defining a virtual contrast agent injection rate.
Turcea teaches for a generated set of synthetic arterial trees, performing contrast agent simulations, resulting in the time-resolved results of the contrast agent propagation in a plurality of synthetic angiograms (Paragraph 0023), wherein the contrast agent simulations can use different contrast agent injection rates profiles (Paragraph 0091).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, Yamazaki, and Itu, wherein the method includes defining a virtual contrast agent injection rate, as taught by Turcea, in order to generate synthetic images (coronary angiograms) depicting contrast agent propagation, which can then be used as a source of synthetic ground truth information for training the neural network (Turcea, Paragraphs 0092-0093).
Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gauriau, in view of Yamazaki, and further in view of Itu, as applied to claim 27 above, and further in view of Non-Patent Literature: “Deep Learning Algorithms for Coronary Artery Plaque Characterisation from CCTA Scans” to Denzinger et al. “Denzinger”.
Regarding claim 28, the modifications of Gauriau, Yamazaki, and Itu disclose all the features of claim 27 above.
However, the modifications of Gauriau, Yamazaki, and Itu do not disclose that the neural network comprises a 2.5D encoder architecture.
Denzinger teaches a similar method of using deep learning for coronary artery plaque characterization (Title) with a potential for predicting fractional flow reserve (Page 6, Discussion). Denzinger teaches using a 2.5D CNN (convolutional neural network) (Page 3, Section: 2.5D-CNN), which reads on the claimed 2.5D encoder architecture.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system as described by Gauriau, Yamazaki, and Itu, wherein the neural network comprises a 2.5D encoder architecture, as taught by Denzinger. Using a 2.5D encoder architecture mitigates the computational expense to obtain and process the imaging data that is usually associated with 3D representations or a multitude of 2D representations (Page 3, Section: 2.5D-CNN).
Response to Arguments
Applicant's arguments filed 11/24/2025 have been fully considered but they are not persuasive. As stated in the advisory action mailed on 12/24/2025, Applicant argues (Page 6 of Arguments) that the cited references fail to teach or suggest "normalizing the image resolution of the first time series of diagnostic images to generate a second time series of diagnostic images comprising a different, second value of the image resolution."
More specifically, Applicant argues (Page 6 of Arguments) that prior art to Chen does not disclose "normalizing the first time series of diagnostic images to generate a second time series of diagnostic images", and instead normalizes the score vector S. Applicant's arguments are moot since, as previously cited in the office action mailed on 09/24/2025, and repeated above in the 35 U.S.C. 103 rejection, primary prior art to Gauriau discloses receiving a first time series of diagnostic images of blood flow through a vasculature (sequence of images 104 with a stream of images 102, Page 10, Paragraph starting at line 20; wherein the images are interpreted as the 2D imaging data 42 generated by image acquisition machine 30, Page 6, Paragraph starting at line 17). Additionally, Gauriau discloses generating a subset of the time series of diagnostic images, as the group of images that includes contrast agent, that are grouped as a temporal cluster (Page 10, last paragraph, to the top of Page 11). This grouping, as cited previously (Page 11, Paragraph starting at line 7), can be determined by a score vector or, as stated in the same paragraph, by a deep-learning neural network, to identify the contrast enhanced images, which equates to the group of images that includes contrast agent.
Yamazaki teaches a similar method of using a trained classifier for a determination result from input medical images (Abstract; more specifically, using a trained classifier to determine a target region of the image, and classify the image). Yamazaki teaches the input image is subjected to arbitrary preprocessing, such as changes in the image size and resolution normalization, wherein the arbitrary preprocessing is of similar value as that used for training the classifier (Paragraph 0061). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine Yamazaki with Gauriau, in order to perform the preprocessing (such as changes in the image resolution) on the images before the contrast enhanced image detection (which is then used to determine the temporal clusters), so that the images undergo the same preprocessing as the images used to train the image detection algorithm (Yamazaki, Paragraph 0061) to achieve higher accuracy.
Applicant argues that nothing in Yamazaki suggests any time-series-wide image resolution normalization, nor suggests any description related to "generating ... a quantitative fluid dynamics parameter that is descriptive of the blood flow through the vasculature" (Page 8) and that Yamazaki is not concerned with the same challenges that the presented disclosure is solving and therefore it would not have been obvious to modify with Gauriau.
The examiner respectfully disagrees. Although Yamazaki is not directed towards blood flow images or calculation of quantitative fluid dynamics, Yamazaki is directed towards the steps of processing images before a detection/classification algorithm is applied. Gauriau has similar steps of applying a detection algorithm to group the series of images that contains the contrast enhanced images, before they are used further down the procedural flow chart to calculate the quantitative fluid dynamics parameters. Therefore, the benefits discussed above, as taught by Yamazaki, would be relevant and beneficial (i.e., improved accuracy) to the disclosure of Gauriau. Additionally, since Yamazaki teaches normalizing all the images (i.e., the input images) that will be processed by the detection algorithm, and Gauriau teaches a sequence of images is used as the starting point, the normalization of the sequence of images used for input would read on the image resolution normalization of the time series. For the reasons above, the examiner does not find Applicant's arguments convincing, and claims 16 and 24 remain rejected. The remaining claims are rejected for at least the reason that they inherit these deficiencies by nature of their dependency on either claim 16 or claim 24.
Additionally, as stated in MPEP 1207.03(a), subsection II, point #3 under factual situations that do not constitute a new ground of rejection, the examiner's reliance on the teachings of Gauriau and Yamazaki, while omitting the teachings of Chen, does not constitute a new ground of rejection, since the examiner relies on the same teachings of Gauriau and Yamazaki, and no new paragraphs of Gauriau and Yamazaki are cited/discussed in this response.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Milton Truong whose telephone number is (571)272-2158. The examiner can normally be reached 9AM - 5PM, MON-FRI.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MT/Examiner, Art Unit 3798
/KEITH M RAYMOND/Supervisory Patent Examiner, Art Unit 3798