Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3, 11, 12 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 3 and 15, each claim recites “wherein the anti-aliasing is carried out by selecting a limited number of random samples such that multiple samples end up in each voxel within the segment.” It is unclear from the context of the claim how the anti-aliasing is performed. Because the claim states that multiple samples end up in each voxel, a voxel is interpreted as a combination of different samples. One of ordinary skill in the art would ask: “How can multiple samples end up in a single voxel if the single voxel is the representation of a single sample, and how is this performed?” and “Isn’t a method like this usually performed using voxel masks?” Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claim for reasons regarding clarity.
Claim 11 recites the limitation "the real data". There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation "the real data". There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, 5, 8, 9, 10, 11, 12, 13, 16, 18, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nadakuditi et al., hereafter Nadakuditi (US Publication No. 20240386547 A1) in view of Hannes et al., hereafter Hannes (EP Publication No. 3982324 A1) and Ramsay et al., hereafter Ramsay (US Publication No. 20240104701 A1).
As per claim 1, Nadakuditi teaches “A system for generating synthetic training data for blood vessels… the system comprising: a parametric blood vessel branching simulation module configured to generate a 3D vessel model..., and… to yield the synthetic training data.” (See paragraph 15: “[0015] In example embodiments, the present techniques provide processes to train and utilize a multi-stage neural network ML model for performing 3D reconstruction of coronary vessel trees. The training of the ML model is performed by generating synthetic coronary vessel trees derived from statistical models of 3D image data such as magnetic resonance angiography and/or computed tomography angiography image data, generating binarized angiography images from different conical projections of the synthetic vessel trees, and using the binarized angiography images to train the ML Model to perform 3D reconstruction from binarized angiography images, (e.g., clinical angiography images).” See also paragraph 38 and fig. 5, which show that the method can be considered parametric, i.e., it utilizes parameters. See also paragraph 52, which shows the implementation of stenosis, which can be interpreted as a type of lesion. Nadakuditi). However, Nadakuditi does not completely teach “and/or lesions segmentation… and/or a parametric lesion simulation module configured to generate a 3D lesion model… add a background to… and/or lesion model”.
Hannes teaches “and/or lesions segmentation… to generate a 3D… and/or a parametric lesion simulation module configured to generate a 3D lesion model… and/or lesion model, to yield the synthetic training data” (See last 5 paragraphs of page 3: “The computer-implemented method comprises: obtaining an image of blood vessels in a region of interest of a subject; performing image segmentation on the image to identify one or more parameters of the blood vessels from the image; synthesizing a set of one or more lesions for one or more of the blood vessels of the image by processing the identified one or more parameters of the blood vessels; and generating the synthetic image, for training a machine-learning method for assessing lesions in blood vessels of an image, by combining the image and the synthesized set of one or more lesions for one or more of the blood vessels contained in the image; and generating one or more annotations for the synthetic image, based upon the characteristics of the set of one or more synthetic lesions in the synthetic image.” Since the generation of synthetic images is dependent on parameters, it is parametric. See also page 7 last paragraph: “In particular, step 132 may comprise defining which parts (e.g. which pixels and/or voxels) of the image represent the exterior bounds of a synthetic lesion. For instance, in one example, the image is a 3D image of the region of interest, step 130 obtains a voxelized and explicit geometric 3D lesion outline from the determined parameters of the blood vessel(s).” See also page 11 paragraph 3: “In more complex procedures, e.g. with 3D images, a 3D lesion may be defined using multiple sets of the above described shape parameters, each representing a shape of part of the lesion at different positions (and therefore different cross-sections) along the centerline of a blood vessel…” Overall, the lesions form part of the blood vessels; therefore the reference also covers the broadest reasonable interpretation (BRI) of blood vessels when producing synthetic lesions in blood vessels. Hannes)
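Hannes's described procedure of deriving a voxelized, explicit geometric 3D lesion outline from shape parameters defined at positions along a vessel centerline can be illustrated with a minimal sketch. The function name, grid size, and sphere-sweep construction below are assumptions for illustration only, not material from the reference:

```python
import numpy as np

def voxelize_lesion(centerline, radii, grid_shape=(32, 32, 32)):
    """Mark voxels whose centers lie within the local lesion radius of any
    centerline point, giving a crude explicit geometric 3D lesion outline."""
    mask = np.zeros(grid_shape, dtype=bool)
    zz, yy, xx = np.indices(grid_shape)
    centers = np.stack([xx, yy, zz], axis=-1) + 0.5  # voxel center coordinates
    for p, r in zip(centerline, radii):
        dist = np.linalg.norm(centers - np.asarray(p), axis=-1)
        mask |= dist <= r
    return mask

# A short lesion along a straight centerline, with a different
# cross-section (radius) at each position along the centerline.
centerline = [(16.0, 16.0, 10.0), (16.0, 16.0, 14.0), (16.0, 16.0, 18.0)]
radii = [4.0, 3.0, 2.0]
lesion = voxelize_lesion(centerline, radii)
```

The per-point radii stand in for the "multiple sets of shape parameters" the reference describes; a fuller implementation would use richer cross-section shapes than spheres.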
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nadakuditi with the teachings of Hannes to include a parametric lesion simulation module that produces a 3D lesion model in addition to the blood vessel model. The modification would have been motivated by the desire to improve the availability of ground-truth data for training machine-learning models to identify or classify lesions in blood vessels; therefore it is an improvement, as suggested by Hannes (See page 3 paragraphs 2-5: “One area of medical image processing (with machine-learning methods) that faces the above-described problem is in applying machine learning models to images of blood vessels, and in particular to determining the presence (and characteristics) of lesions within blood vessels. There is therefore a desire to improve the availability of ground-truth data for training a machine-learning method suitable for analyzing images to identify, classify or otherwise determine characteristics of blood vessel lesions in the images.”)
Ramsay teaches “an augmentation module configured to add a background to the respective… model,” (See abstract and paragraphs 151-163. These paragraphs show that a background image made of tissue samples is created to contrast correctly with simulated vessels. “[0151] Performance testing is standardized by having a range of “targets” of varying size, shape, and contrast embedded in a range of “tissue” backgrounds. In X-ray Angiography, the relative contrast between the vessel and surrounding tissue is dependent on a number of factors, including the amount of dye in the vessel, the shape of the vessel, the angle of the X-ray plane, the presence of other overlying structures, etc….” “[0156] To create a single area of tissue background comprised of clinical sub-images, a group of approximately 1800 sub-images with a common mean was selected amongst all the sub-images and randomly arranged as a 125×15 matrix for a new sub-image size of 3000×360… With this design, the phantom has a standardized range of background values and includes representative noise commonly encountered in the angiograms. FIG. 18 shows the distribution of each of these 11 full tissue groups contained in the digital phantom images.” Paragraph 157 further shows: “[0157] The “targets” embedded in the digital phantom are meant to represent a range of vessels encountered in the clinical image. For this clinical application, the targets simulate dark vessels surrounded by lighter tissue, as opposed to other applications and modalities, which often have test targets with both lighter and darker contrast.” Ramsay)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nadakuditi with the teachings of Ramsay to add a background to the respective model. The modification would have been motivated by the desire to obtain a clearer view of the blood vessels through better contrast and better standardization; therefore it is an improvement, as suggested by Ramsay (See paragraphs 3 and 152: “[0003] Analysis of heart and peripheral vascular health is commonly assessed through the use of angiograms, which are serially captured individual X-ray images of blood vessels that show blood flow through arteries. A contrast dye is injected into the blood to cause the blood vessels to appear more clearly in the image. Vessels appear dark against a lighter background in the acquired image wherever blood flows”. “[0152]… The goal of the use of a digital phantom in Imago's testing is to simulate a complete range of background (non-vessel) areas that may be encountered but in a more standardized environment.” See also paragraphs 157 and 166.)
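The background-augmentation concept Ramsay describes (dark vessel targets embedded in a lighter, noisy tissue background) can be sketched as follows. The intensity values, noise level, and function name are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_background(vessel_mask, mean_tissue=180.0, noise_sd=10.0):
    """Composite a simulated vessel onto a lighter, noisy tissue background,
    mimicking X-ray angiography contrast (vessels dark, tissue light)."""
    background = mean_tissue + noise_sd * rng.standard_normal(vessel_mask.shape)
    image = background.copy()
    image[vessel_mask] -= 80.0  # vessels appear darker than surrounding tissue
    return np.clip(image, 0, 255)

# A simple vertical "vessel" target embedded in the tissue background.
mask = np.zeros((64, 64), dtype=bool)
mask[:, 30:34] = True
img = add_background(mask)
```

A fuller phantom, as the reference describes, would tile real clinical sub-images rather than synthesize Gaussian noise.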
Claim 13 is rejected under the same analysis as claim 1.
Claim 20 is rejected under the same analysis as claim 1.
As per claim 4, Nadakuditi in view of Hannes and Ramsay already teaches “the system of claim 1, comprising the parametric lesion simulation module.” (See last 5 paragraphs of page 3. See also page 7 last paragraph. It shows the module used for generation. Hannes).
Claim 16 is rejected under the same analysis as claim 4.
As per claim 5, Nadakuditi in view of Hannes and Ramsay already teaches “the system of claim 4, wherein the augmentation module is further configured to add the 3D vessel model to the 3D lesion model, to yield the synthetic training data.” (See last 5 paragraphs of page 3. See also page 7 last paragraph. The 3D lesion model already includes the 3D vessel model to yield training data. See also page 11 paragraph 3: “In more complex procedures, e.g. with 3D images, a 3D lesion may be defined using multiple sets of the above described shape parameters, each representing a shape of part of the lesion at different positions (and therefore different cross-sections) along the centerline of a blood vessel….” Hannes.) (See also Nadakuditi paragraphs 15, 38 and fig. 5; most importantly, paragraph 52 shows stenosis can also be implemented in this model, which can be considered a lesion. Nadakuditi)
As per claim 8, Nadakuditi in view of Hannes and Ramsay already teaches “the system of claim 1, further configured to train a deep neural network (DNN) using the synthetic training data.” (See paragraph 15 in Nadakuditi: “[0015] In example embodiments, the present techniques provide processes to train and utilize a multi-stage neural network ML model for performing 3D reconstruction of coronary vessel trees. The training of the ML model is performed by generating synthetic coronary vessel trees derived from statistical models of 3D image…” Nadakuditi) (See also page 12 paragraphs 2-7 in Hannes)
As per claim 9, Nadakuditi in view of Hannes and Ramsay already teaches “a DNN training system configured to train a DNN for blood vessel and/or lesion segmentation using the synthetic training data generated by the system of claim 1.” (See paragraph 15 in Nadakuditi: “[0015] In example embodiments, the present techniques provide processes to train and utilize a multi-stage neural network ML model for performing 3D reconstruction of coronary vessel trees. The training of the ML model is performed by generating synthetic coronary vessel trees derived from statistical models of 3D image…”, see also paragraph 40: “[0040]… More generally, the computer-readable media 106 may store trained deep learning models, including vessel segmentation machine learning models…”. See also paragraphs 79-81. Nadakuditi) (See also page 12 paragraphs 2-7 and page 5 paragraph 5 in Hannes.)
Claim 18 is rejected under the same analysis as claim 9.
As per claim 10, Nadakuditi in view of Hannes and Ramsay already teaches “the DNN training system of claim 9, further configured to receive real data and use the real data to enhance the training of the DNN using the real data in addition to the synthetic training data.” (See paragraphs 37, 79, 65, 62 and 56; the neural network is trained with ground truth data/clinical data, i.e., real data. “[0037] The described methods include training of the neural network ML model using a dataset of synthetic coronary trees from a vessel generator using both clinical image data (e.g., MRA and CTA image data) and literature values on coronary anatomy. While the training is described as performed using synthetic vessel trees, the ML model may be trained using images of clinically obtained images of vessel trees and verified using 3D reconstructions of the clinically obtained images.” “[0079] To train the proposed multi-staged neural network, hundreds to thousands of ground truth 3D coronary trees can be used with their corresponding segmented 2D angiograms. In practice, this means that thousands of patients with both 3D CTA data and 2D X-ray angiograms must be identified, which is typically not feasible in many medical centers and for many studies…To produce a large enough dataset and eliminate external sources of error such as temporal registration, a method to produce a sufficiently large training dataset consisting of 5,000 static 3D coronary tree geometries and their corresponding sets of 2D projections was devised. While synthetic data has been used to train and validate the 3D reconstruction multi-stage neural network described herein, the use of synthetic projection images as input does not preclude future clinical application.” Nadakuditi)
Claim 19 is rejected under the same analysis as claim 10.
As per claim 11, Nadakuditi in view of Hannes and Ramsay already teaches “the DNN training system of claim 9, further configured to use the real data together with the synthetic training data for the training of the DNN.” (See paragraphs 37, 79, 65, 62 and 56; the neural network is trained with ground truth data/clinical data, i.e., real data. “[0037] The described methods include training of the neural network ML model using a dataset of synthetic coronary trees from a vessel generator using both clinical image data (e.g., MRA and CTA image data) and literature values on coronary anatomy. While the training is described as performed using synthetic vessel trees, the ML model may be trained using images of clinically obtained images of vessel trees and verified using 3D reconstructions of the clinically obtained images.” “[0079] To train the proposed multi-staged neural network, hundreds to thousands of ground truth 3D coronary trees can be used with their corresponding segmented 2D angiograms. In practice, this means that thousands of patients with both 3D CTA data and 2D X-ray angiograms must be identified, which is typically not feasible in many medical centers and for many studies…To produce a large enough dataset and eliminate external sources of error such as temporal registration, a method to produce a sufficiently large training dataset consisting of 5,000 static 3D coronary tree geometries and their corresponding sets of 2D projections was devised. While synthetic data has been used to train and validate the 3D reconstruction multi-stage neural network described herein, the use of synthetic projection images as input does not preclude future clinical application.” Nadakuditi)
As per claim 12, Nadakuditi in view of Hannes and Ramsay already teaches “the DNN training system of claim 9, further configured to use the real data to improve the training of the DNN.” (See paragraphs 37, 79, 65, 56 and 62; the neural network is trained with ground truth data/clinical data, i.e., real data. “[0037] The described methods include training of the neural network ML model using a dataset of synthetic coronary trees from a vessel generator using both clinical image data (e.g., MRA and CTA image data) and literature values on coronary anatomy. While the training is described as performed using synthetic vessel trees, the ML model may be trained using images of clinically obtained images of vessel trees and verified using 3D reconstructions of the clinically obtained images.” “[0079] To train the proposed multi-staged neural network, hundreds to thousands of ground truth 3D coronary trees can be used with their corresponding segmented 2D angiograms. In practice, this means that thousands of patients with both 3D CTA data and 2D X-ray angiograms must be identified, which is typically not feasible in many medical centers and for many studies…To produce a large enough dataset and eliminate external sources of error such as temporal registration, a method to produce a sufficiently large training dataset consisting of 5,000 static 3D coronary tree geometries and their corresponding sets of 2D projections was devised. While synthetic data has been used to train and validate the 3D reconstruction multi-stage neural network described herein, the use of synthetic projection images as input does not preclude future clinical application.” Nadakuditi)
Claims 2, 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Nadakuditi in view of Hannes and Ramsay, and further in view of Kobayashi et al., hereafter Kobayashi (US Publication No. 20240087113 A1).
As per claim 2, Nadakuditi in view of Hannes and Ramsay already teaches “the system of claim 1, wherein the augmentation module is further configured to add noise…to the 3D vessel model and/or lesion model, to yield the synthetic training data.” (On Hannes, see the last paragraph on page 9 and the first paragraph on page 10: “This means that the synthesized image will share the same high-frequency component as the original image. This results in the synthesized image having a similar noise characteristic to the original image, so that the synthesized image more closely resembles a true or real-life image of one or more blood vessel(s) having one or more lesion(s) for improved training of the machine-learning method.” Hannes) (Ramsay also teaches adding noise in paragraphs 163 and 156: “[0163]… However, once the background tissue is added in, the natural variation (noise), which is inherent to Angiogram images, results in variation among the targets which is visible in the cross-sections.” “[0156]… With this design, the phantom has a standardized range of background values and includes representative noise commonly encountered in the angiograms.” Ramsay). However, Nadakuditi in view of Hannes and Ramsay does not teach “organ boundaries”.
Kobayashi teaches “organ boundaries” (See paragraphs 152, 155 and 156. “[0155] FIG. 24 is an explanatory diagram illustrating a method of specifying the organ boundary. The control unit 201 of the support apparatus 200 acquires the recognition result of the learning model 350 by inputting the operative field image to the learning model 350 that has completed training. The control unit 201 generates a recognition image of the surface blood vessel appearing on the surface of the organ by referring to the recognition result of the learning model 350. The solid line in FIG. 24 indicates the surface blood vessel recognized by the learning model 350.” Kobayashi)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nadakuditi, Hannes and Ramsay with the teachings of Kobayashi to include organ boundaries in the model. The modification would have been motivated by the desire to use machine learning to determine surface blood vessels, to clearly distinguish between organ tissue and blood vessel tissue, and in addition to support surgery planning; therefore it is an improvement, as suggested by Kobayashi (See paragraphs 150, 151, 152, 153, 154 and 161. “[0150] On the other hand, in the present embodiment, the recognized ureter tissue can be displayed in a distinguishable manner in units of pixels. Therefore, the recognized ureter tissue can be displayed in an easy-to-see manner. In particular, in the present embodiment, since the ureter tissue and the blood vessel tissue (surface blood vessel) appearing on the surface of the ureter tissue are displayed so as to be distinguished from each other, the presence of the surface blood vessel that moves with the peristalsis of the ureter is highlighted.” “[0151]… The surface blood vessel appearing on the surface of the ureter has a pattern unique to the ureter, and is different from the patterns of surface blood vessels appearing on other organs.” “[0154]… That is, the learning model 350 according to the eighth embodiment is trained so as to recognize the surface blood vessel and other tissues in a distinguishable manner.” “[0161] As described above, in the eighth embodiment, the boundary of an organ can be specified by using surface blood vessels appearing on the surface of the organ as clues. The support apparatus 200 can support surgery by presenting the information of the specified boundary to the operator.”)
Claim 14 is rejected under the same analysis as claim 2.
As per claim 6, Nadakuditi in view of Hannes and Ramsay teaches “the system of claim 4, wherein the augmentation module is further configured to add noise… to the 3D lesion model, to yield the synthetic training data.” (On Hannes, see the last paragraph on page 9 and the first paragraph on page 10: “This means that the synthesized image will share the same high-frequency component as the original image. This results in the synthesized image having a similar noise characteristic to the original image, so that the synthesized image more closely resembles a true or real-life image of one or more blood vessel(s) having one or more lesion(s) for improved training of the machine-learning method.” Hannes) (Ramsay also teaches adding noise in paragraphs 163 and 156: “[0163]… However, once the background tissue is added in, the natural variation (noise), which is inherent to Angiogram images, results in variation among the targets which is visible in the cross-sections.” “[0156]… With this design, the phantom has a standardized range of background values and includes representative noise commonly encountered in the angiograms.” Ramsay). However, Nadakuditi in view of Hannes and Ramsay does not teach “organ boundaries”.
Kobayashi teaches “organ boundaries” (See paragraphs 152, 155 and 156. “[0155] FIG. 24 is an explanatory diagram illustrating a method of specifying the organ boundary. The control unit 201 of the support apparatus 200 acquires the recognition result of the learning model 350 by inputting the operative field image to the learning model 350 that has completed training. The control unit 201 generates a recognition image of the surface blood vessel appearing on the surface of the organ by referring to the recognition result of the learning model 350. The solid line in FIG. 24 indicates the surface blood vessel recognized by the learning model 350.” Kobayashi)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nadakuditi, Hannes and Ramsay with the teachings of Kobayashi to include organ boundaries in the model. The modification would have been motivated by the desire to use machine learning to determine surface blood vessels, to clearly distinguish between organ tissue and blood vessel tissue, and in addition to support surgery planning; therefore it is an improvement, as suggested by Kobayashi (See paragraphs 150, 151, 152, 153, 154 and 161. “[0150] On the other hand, in the present embodiment, the recognized ureter tissue can be displayed in a distinguishable manner in units of pixels. Therefore, the recognized ureter tissue can be displayed in an easy-to-see manner. In particular, in the present embodiment, since the ureter tissue and the blood vessel tissue (surface blood vessel) appearing on the surface of the ureter tissue are displayed so as to be distinguished from each other, the presence of the surface blood vessel that moves with the peristalsis of the ureter is highlighted.” “[0151]… The surface blood vessel appearing on the surface of the ureter has a pattern unique to the ureter, and is different from the patterns of surface blood vessels appearing on other organs.” “[0154]… That is, the learning model 350 according to the eighth embodiment is trained so as to recognize the surface blood vessel and other tissues in a distinguishable manner.” “[0161] As described above, in the eighth embodiment, the boundary of an organ can be specified by using surface blood vessels appearing on the surface of the organ as clues. The support apparatus 200 can support surgery by presenting the information of the specified boundary to the operator.”)
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Nadakuditi in view of Hannes and Ramsay, and further in view of Crassin et al. (US Publication No. 20140267266 A1).
As per claim 3, Nadakuditi in view of Hannes and Ramsay teaches “the system of claim 1, wherein:
the system comprises the parametric blood vessel branching simulation module, which is configured to generate the 3D vessel model as a hierarchical tree comprising a plurality of segments,” (See paragraph 15 “[0015] In example embodiments, the present techniques provide processes to train and utilize a multi-stage neural network ML model for performing 3D reconstruction of coronary vessel trees. The training of the ML model is performed by generating synthetic coronary vessel trees derived from statistical models of 3D image data such as magnetic resonance angiography and/or computed tomography angiography image data, generating binarized angiography images from different conical projections of the synthetic vessel trees, and using the binarized angiography images to train the ML Model to perform 3D reconstruction from binarized angiography images, (e.g., clinical angiography images).” Nadakuditi)
the segments are generated as… lines, each having a specified length and specified start and end thicknesses, (See paragraph 71; the length and thickness (radius) are specified. “The low-fidelity, or 1D vessel tree model representation, is given by the centerline coordinates and values of radius of each point Ni for each branch Mi. The high-fidelity reconstruction representation includes the volume bounded by the smooth analytical surface formed between the radii at each centerline coordinate point with the volume encompassing all centerline points Ni. In the illustrated example, the vessel tree i is represented by a tree matrix Mi×Ni×4, where Mi is the number of branches in the vessel tree i, Ni is the number of points on each branch centerline, and 4 is the numerical dimension of the data encoded in each point of the branch centerline, specifically its three-dimensional spatial coordinates (x, y, z) and radius r.” The three-dimensional spatial coordinates permit it to have a specified start and end thickness (radius), and the branches are also lines, such as seen in fig. 5. Paragraph 52 also shows that the stenosis length can be specified (which includes the length of the blood vessel). See also paragraph 62: “[0062]… The data associated with the set of synthetic vessel trees and the reconstructed vessel trees may include data corresponding to vessel tree length, one or more vessel diameters, vessel tortuosity, and/or stenosis patterns.” See also paragraph 88, which shows a regularized length parameter (therefore specified): “[0088]… The MAE in vessel length was 8.83±4.81 mm. Optimal values for vessel length reconstruction were obtained with a regularization length parameter λ=0.1 (see Eq. 2), which led to a 47% decrease in vessel length error (16.36±2.88 mm) compared to the same network trained without length regularization in the loss function.”) “…, and wherein the specified end thickness is equal or smaller than the specified start thickness, (See paragraph 71; the length and thickness (radius) are specified. “The low-fidelity, or 1D vessel tree model representation, is given by the centerline coordinates and values of radius of each point Ni for each branch Mi. The high-fidelity reconstruction representation includes the volume bounded by the smooth analytical surface formed between the radii at each centerline coordinate point with the volume encompassing all centerline points Ni. In the illustrated example, the vessel tree i is represented by a tree matrix Mi×Ni×4, where Mi is the number of branches in the vessel tree i, Ni is the number of points on each branch centerline, and 4 is the numerical dimension of the data encoded in each point of the branch centerline, specifically its three-dimensional spatial coordinates (x, y, z) and radius r.” The three-dimensional spatial coordinates permit it to have a specified end and start thickness (radius), such as seen in fig. 5, which shows a thicker start and thinner end. See also paragraph 52, which says the radius can be specified to be narrower when including stenosis and that linear tapering is used. See also paragraphs 81 and 68-77 to see how the radius works, and see also fig. 6 and fig. 9, which show how stenosis can be applied to give the segment/branch a thicker start and thinner end. Nadakuditi)
“the segments are elongated by at least one of:
addition of a segment having a specified start thickness that is equal or smaller than the specified end thickness of the segment that is elongated, and/or branching into two segments having equal or smaller thickness than the segment that is branched, wherein a string of branched segments follows a semi- linear or a curved line, and” (See paragraph 71; the length and thickness (radius) are specified. “The low-fidelity, or 1D vessel tree model representation, is given by the centerline coordinates and values of radius of each point Ni for each branch Mi. The high-fidelity reconstruction representation includes the volume bounded by the smooth analytical surface formed between the radii at each centerline coordinate point with the volume encompassing all centerline points Ni. In the illustrated example, the vessel tree i is represented by a tree matrix Mi×Ni×4, where Mi is the number of branches in the vessel tree i, Ni is the number of points on each branch centerline, and 4 is the numerical dimension of the data encoded in each point of the branch centerline, specifically its three-dimensional spatial coordinates (x, y, z) and radius r.” The three-dimensional spatial coordinates permit it to have a specified end and start thickness (radius), such as seen in fig. 5, which shows a thicker start and thinner end. See also paragraph 52, which says the radius can be specified to be narrower when including stenosis and that linear tapering is used with branching. See also paragraphs 81 and 68-77 to see how the radius works, and see also fig. 6 and fig. 9, which show how stenosis can be applied to give the segment/branch a thicker start and thinner end. Figures 5 and 8 also show branching into two segments with smaller thickness than the segment from which they are branched. Each of those shown in the figures is also semi-linear or curved. Nadakuditi)
“the segments are non-overlapping.” (Paragraph 37 says that this method is robust to overlap: “[0037]… The multi-stage neural network ML model also achieves a 52% and 38% reduction in vessel centerline reconstruction errors compared to single-stage neural networks methods and projective geometry-based methods, respectively. The described methods are robust to challenges faced by other 3D reconstruction methods, such as vessel foreshortening and overlap in the input images.” Such segments are seen in figures 5, 8, 9, 11 and 13. Overlap is also prevented because the output is a 3D reconstruction, as seen in fig. 5. Nadakuditi.) However, Nadakuditi in view of Hannes and Ramsay does not teach “anti-aliased lines” and “wherein the anti-aliasing is carried out by selecting a limited number of random samples such that multiple samples end up in each voxel within the segment.”
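For illustration only (not part of the record), the Mi×Ni×4 tree-matrix representation quoted above from Nadakuditi's paragraph 71 can be sketched as follows. All values and names here are hypothetical, chosen only to show a tapered, semi-linear branch structure:

```python
import numpy as np

# Hypothetical dimensions: M branches, N centerline points per branch.
M, N = 3, 50

# Tree matrix of shape (M, N, 4): (x, y, z, r) per centerline point,
# mirroring the Mi x Ni x 4 representation described in Nadakuditi [0071].
tree = np.zeros((M, N, 4))

t = np.linspace(0.0, 1.0, N)
for m in range(M):
    # Semi-linear centerline: straight in x with a slight curve in y.
    tree[m, :, 0] = t * 10.0              # x coordinate along the branch
    tree[m, :, 1] = m * 2.0 + 0.5 * t**2  # y coordinate, slightly curved
    tree[m, :, 2] = 0.0                   # z coordinate (planar for simplicity)
    # Linear tapering: end radius equal to or smaller than start radius.
    r_start, r_end = 1.0, 0.4
    tree[m, :, 3] = r_start + (r_end - r_start) * t

# The radius never increases along a branch, i.e. the specified end
# thickness is equal to or smaller than the specified start thickness.
assert np.all(np.diff(tree[:, :, 3], axis=1) <= 0)
```

The sketch only demonstrates the shape of the data structure (branches × centerline points × (x, y, z, r)); the actual values in Nadakuditi are learned/reconstructed, not synthesized this way.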
Crassin teaches “anti-aliased lines” and “wherein the anti-aliasing is carried out by selecting a limited number of random samples such that multiple samples end up in each voxel within the segment” (See paragraphs 2, 56, 59, 62, 63, 65, 66 and 101. Paragraph 62 shows the use of multi-sample anti-aliasing in voxels. Paragraphs 65 and 66 show that any sample point (therefore random) and any limited number of sample points can be used; see also paragraphs 67-70 and 91-92, which show that any pattern of sample points can be applied (therefore also random). Paragraph 56 shows that multi-sample anti-aliasing is performed using graphics primitives, and paragraph 54 shows that line segments are among those graphics primitives. See also figs. 6A, 6B, 7A and 7B for an explanation of the multi-sample anti-aliasing in voxels. “[0062] FIGS. 6A and 6B illustrate a technique for performing multi-sample anti-aliased (MSAA) voxelization, according to one embodiment of the present invention. MSAA voxelization may be performed by analyzing each sample point 610 (e.g., 610-1) within the voxel 510-1 to determine whether the sample point 610 is on the front side 635 or the back side 630 of a primitive 520-1.” “[0066] Any number of sample points 610 may be distributed within the voxel 510-1. The number of sample points 610 may be based on, for example, a desired granularity, accuracy, processing workload, etc. In one embodiment, 64 sample points 610 (e.g., 4.times.4.times.4 sample points) may be distributed in the voxel 510-1 such that the computed occupancy of the voxel is quantized to 1/64…” “[0101]… The multi-sample anti-aliasing technique for performing voxelization distributes sample points within a voxel, determines which primitives intersect the voxel, and analyzes the intersecting primitives to determine whether each sample point is inside or outside of the geometric object…” )
It would have been obvious to one of ordinary skill in the art before the effective filing
date of the claimed invention to combine the teachings of Nadakuditi with Hannes and Ramsay with the teachings of Crassin to use anti-aliasing and a limited number of sample points per voxel. The modification would have been motivated by the desire for smoother graphics (it is well known in the art that anti-aliasing provides smoother edges and an overall smoother, more realistic appearance), by the ability to correctly select the number of samples so as to prevent unwanted results such as popping and increased processing requirements, and by the need for better voxelization; it is therefore an improvement, as suggested by Crassin (See paragraph 66: “[0066]… In one embodiment, 64 sample points 610 (e.g., 4.times.4.times.4 sample points) may be distributed in the voxel 510-1 such that the computed occupancy of the voxel is quantized to 1/64. Selecting too few sample points 610 may result in "popping" when voxelizing small animated objects, such as objects having small, sharp features. On the other hand, selecting too many sample points 610 may increase processing requirements above a desired level.” See also paragraphs 4-6, which show the need for better voxelization. Crassin)
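For illustration only (not part of the record), the MSAA-voxelization sampling scheme quoted above from Crassin's paragraphs [0062] and [0066] can be sketched as follows. The planar primitive, its normal, and all values are hypothetical; only the 4×4×4 = 64 sample points and the occupancy quantized to 1/64 come from the cited text:

```python
import numpy as np

# Distribute 4 x 4 x 4 = 64 sample points uniformly within a unit voxel,
# as in Crassin's example ([0066]); occupancy is quantized to 1/64.
g = (np.arange(4) + 0.5) / 4.0
sx, sy, sz = np.meshgrid(g, g, g, indexing="ij")
samples = np.stack([sx, sy, sz], axis=-1).reshape(-1, 3)  # (64, 3)

# Hypothetical planar primitive through the voxel: sample points with
# dot(p, normal) < d are on the "back" side of the primitive ([0062]),
# i.e. inside the geometric object.
normal = np.array([1.0, 0.0, 0.0])
d = 0.5
inside = samples @ normal < d

# Fraction of sample points inside gives the fractional (anti-aliased)
# occupancy of the voxel, in multiples of 1/64.
occupancy = inside.sum() / len(samples)
print(occupancy)  # 0.5 for a plane at x = 0.5
```

The fractional occupancy is what makes the result anti-aliased: a voxel cut by a primitive gets a partial value instead of a binary in/out decision.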
Claim 15 is rejected under the same analysis as claim 3.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nadakuditi in view of Hannes and Ramsay, and further in view of Shiroishi (US Publication No. 20240122563 A1)
As per claim 7, Nadakuditi in view of Hannes and Ramsay already teaches “the system of claim 4, further configured to generate the 3D lesion model”; however, Nadakuditi in view of Hannes and Ramsay does not teach “using multiple image phases.”
Shiroishi teaches “using multiple image phases” (See paragraphs 5 and 6 “[0005]… If CT imaging is performed continuously in time according to the injection of the contrast medium, a multi-time-phase 4D-CTA image showing the state of the inflow of the contrast medium can also be obtained.” Shiroishi)
It would have been obvious to one of ordinary skill in the art before the effective filing
date of the claimed invention to combine the teachings of Nadakuditi with Hannes and Ramsay with the teachings of Shiroishi to include the use of multiple image phases. The modification would have been motivated by the desire to determine and show the state of inflow through the blood vessels, to confirm the shape and size of a blood vessel, and to better identify diseases for better treatment strategies; it is therefore an improvement, as suggested by Shiroishi (“[0005]… If CT imaging is performed continuously in time according to the injection of the contrast medium, a multi-time-phase 4D-CTA image showing the state of the inflow of the contrast medium can also be obtained.” “[0003] Although a standard shape or blood flow is known for a cerebral blood vessel, a subject may have a shape or a blood flow different from a standard according to individual differences and diseases of the subject. In the diagnosis of cerebral blood vessel areas, it is important for physicians to ascertain a shape of a cerebral blood vessel or a blood flow specific to the subject for identification of the cause of a disease and decision of treatment strategies.” See also paragraphs 6-9. “[0007] Physicians visually observe and ascertain the shape of a blood vessel using CTA and MRA images. To confirm the shape of a blood vessel with CTA and MRA images, a physician needs to confirm a large number of images (slices) in order to avoid missing abnormalities such as slight blood vessel defects. This process is time-consuming. [0008] Also, a physician visually observes and ascertains the state of a blood flow using 4D-CTA and 4D-MRA images. To confirm a blood flow is confirmed with 4D-CTA or 4D-MRA images, a physician needs to confirm images of all time phases to avoid missing even the slightest interruption or regurgitation of a blood flow. This process is also time-consuming.” Shiroishi)
Claim 17 is rejected under the same analysis as claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J MENDEZ MUNIZ whose telephone number is (703)756-5672. The examiner can normally be reached M-F, 8AM - 5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DYLAN JOHN MENDEZ MUNIZ/Examiner, Art Unit 2675
/ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675