DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
The restriction requirement between Groups I and II, as set forth in the Office action mailed on October 2, 2025, has been reconsidered in view of amended claims filed on October 28, 2025. The restriction requirement is hereby withdrawn as there is now unity of invention.
Claim Objections
Claims 1-2, 4, 6 and 13 are objected to because of the following informalities:
In claim 1, in line 2, “Ultrasound, US,” should be replaced with --- ultrasound (US) ---. Claim 13 is similarly objected to.
In claim 1, in line 4, “machine learning, ML,” should be replaced with --- machine learning (ML) ---. Claim 13 is similarly objected to.
In claim 2, in line 2, “motions” should be replaced with --- movements ---.
In claim 4, in line 3, --- first --- should be inserted before “clip”.
In claim 6, in line 3, --- first --- should be inserted before “clip”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regards to claim 1, in lines 11-12, the limitation “repeating steps i), ii) and iii) to obtain a plurality of clips of different view planes” is recited. However, steps ii) and iii) specifically refer to obtaining “a first cluster” that corresponds to “a first view plane” and using the first cluster as a “first clip”, and thus it is unclear how repeating the steps of obtaining a first cluster corresponding to a first view plane and using the first cluster as a first clip can result in obtaining “different” view planes, as repeating the steps would result in obtaining the “first view plane” repeatedly. Claim 13 is similarly rejected.
Claim 4 recites the limitation "the periodic motion" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim.
Claim 5 recites the limitation "the next consecutive turning point" in line 4. There is insufficient antecedent basis for this limitation in the claim.
With regards to claim 10, in line 1, “wherein step v) comprises” is recited. However, claim 10 is dependent upon claim 8, which is dependent upon claim 1, wherein there is no recitation of a “step v)” in either claim 1 or claim 8, thus rendering the claim indefinite. For examination purposes, it is assumed that claim 10 is referring to additional steps of the method of claim 8. Claim 11 is similarly rejected (see line 1, referring to “step vi)”).
Claim 11 recites the limitation "the periodic signal" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the periodic motions" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the detected peaks" in lines 4-5. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 8-10 and 13-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Deo et al. (US Pub No. 2021/0000449).
With regards to claims 1 and 13, Deo et al. disclose an apparatus and a computer implemented method of processing a sequence of Ultrasound, US, images of an anatomical feature with periodic movements, the method comprising:
using a first machine learning, ML, model (i.e. “convolutional neural networks (CNNs)”) to label detections of the anatomical feature (i.e. heart) in the images in the sequence according to view plane of the anatomical feature visible in each respective image (paragraph [0033], referring to convolutional neural networks (CNNs) being used for automatically determining echo views (i.e. apical 2-, 3-, and 4-chamber (A2c, A3c, and A4c), parasternal long axis (PLAX), parasternal short axis at the level of the PSAX and the IVC, etc.), wherein the classification/labeling of which view a particular set of images corresponds can be used later in the pipeline for identifying a structure in the heart; paragraph [0043], referring to block 192 wherein view classification is performed; paragraphs [0045]-[0050], referring to the convolutional neural network successfully discriminating echo views, wherein an initial model comprising of labeled different views is used for the CNN; Figures 1-2, 10);
obtaining a first cluster of consecutive images in the sequence that all correspond to a first view plane, based on the labelling (paragraphs [0034]- [0035], referring to discarding of views with a low probability of being in a view, and thus it follows that images/views with a high probability of being in a view would be kept and grouped/clustered together and/or referring to, based on the identified views, videos can be routed to specific image segmentation models, thus resulting in a “clustering” of images in a sequence that all correspond to a first view plane; paragraph [0048], referring to the grouping of images corresponding to six different echocardiographic views; paragraph [0040], referring to parasternal long-axis (PLAX) videos being phased to identify images at the end of cardiac systole and diastole, wherein the resulting image pairs/clusters can be used to detect a disease; paragraph [0041], referring to images from a same part of the cycle can be bundled/”cluster[ed]” together; paragraph [0043], referring to block 198 wherein images can be phased to identify images corresponding to particular parts of the cardiac cycle; Figures 1-2, 10); and
using the first cluster as a first clip of the first view plane of the anatomical feature (paragraph [0035], referring to, based on the identified views, videos/clips can be routed to specific segmentation models; paragraph [0037], referring to sorting videos in a broad sense of what the view is, but also what structures are sufficiently visible for use in further measurements; paragraph [0040], referring to the view-classified videos/clips; Figures 1-2, 10);
characterized by:
repeating steps i), ii) and iii) to obtain a plurality of clips of different view planes of the anatomical feature (paragraphs [0033], [0035], referring to the CNNs being used for automatically determining echo views which include a plurality of different views, such as the A2c, A3c, A4c, PLAX views, etc., and therefore the steps of using the ML model to label/classify detections of the anatomical feature, obtaining a first cluster, and using the first cluster as a first clip of the first view plane would be repeated for each of the plurality of different views; Figures 1-2, 10); and
selecting the first clip as a preferred clip of the anatomical feature from the plurality of clips, if the first clip comprises a cluster of consecutive images for which the respective labels are more statistically significant compared to other labels in the plurality of clips (paragraph [0034], referring to providing a quality score for the echo/video measurement, wherein a quality score can be determined using an average, median, or maximum assigned probability of a view across every video in the study, and if the best guess for a view still has a low probability, then the measurement might be discarded, and thus it would follow that a view having high probability would be selected as a preferred clip/video; paragraphs [0036]-[0037], referring to probabilities being assigned to the images and using separate classes for views with obscured structures and those with unobscured structures, wherein embodiments can compare the probabilities to determine whether the video should be used to estimate atrial size, and thus the video/clip that is determined to be used to estimate atrial size is viewed as the “preferred clip”; paragraph [0040], referring to, for the detection of a disease characterized by abnormal cardiac thickening, such as hypertrophic cardiomyopathy (HCM) and cardiac amyloidosis, the videos corresponding to the specific parasternal long-axis (PLAX) view can be phased to identify images at the end of the cardiac systole and diastole in order to detect the disease, and therefore the PLAX view video/clip is selected as the preferred clip for the disease detection as it is implicitly “more statistically significant” for the disease detection compared to other labels/classifications in the plurality of clips; paragraph [0043], referring to the view classification providing a probability score (193) of an image being in each of a plurality of views; paragraph [0050]; Figures 1-2,10).
Additionally, with regards to claim 13, Deo et al. disclose that the apparatus comprises a memory (72) comprising instruction data representing a set of instructions (paragraphs [0158]-[0160], referring to the execution of a plurality of instructions from system memory (72); Figure 11); and a processor (73) configured to communicate with the memory and to execute the set of instructions (paragraphs [0157]-[0160], referring to the central processor (73) which communicates with each subsystem and controls the execution of a plurality of instructions from system memory (72); Figure 11), wherein the set of instructions, when executed by the processor, cause the processor to perform the above steps as set forth in claim 1 [see rejection of claim 1].
With regards to claim 2, Deo et al. disclose determining a frequency of the periodic motions from the preferred clip (paragraphs [0041]-[0042], [0123], referring to structure measurements being plotted over time as a set of cycles (e.g., as a wave), and points along that curve can define different aspects of the cardiac cycle (e.g., peaks/maximum and valleys/minimum of the volume of a particular chamber), wherein such plotting of the cycle and tracking/defining of peaks or valleys of the cycle results in a determination of the rate at which the peaks/valleys are repeated over a particular period of time, which thus corresponds to a “frequency”, wherein such tracked peaks/valleys are representative of the periodic motions).
With regards to claim 8, Deo et al. disclose that the method further comprises converting each image in the first clip into a feature vector (i.e. size, mass, length, volume, etc.) to obtain a sequence of feature vectors (paragraph [0042], referring to structure measurements being plotted over time as a set of cycles, wherein, for instance, the view identification can provide an input to the segmentation module, so as to identify a chamber accurately, which then allows tracking its size, which then allows selecting a part of the cycle, e.g., where it is the largest or smallest; paragraph [0043], referring to metrics of cardiac structure (e.g., mass, length, volume) can be performed using the segmentation results of the view-classified images, wherein the variations in the metric can be used to identify images corresponding to particular positions in a cardiac cycle; note that the metrics plotted over time corresponds to a sequence of the feature vectors); determining correlations between the feature vectors in the sequence of feature vectors (paragraph [0042]-[0043], referring to metrics of cardiac structure (e.g., mass, length, volume) can be performed using the segmentation results of the view-classified images, wherein the variations in the metric can be used to identify images corresponding to particular positions in a cardiac cycle; note that determining variations, such as determining peaks/valleys in the metric, requires determining correlations (i.e. matching of peaks/valleys) between the feature vectors (i.e. metrics) plotted over time (i.e. sequence of feature vectors)); and using the correlations to determine a third subset of images from the first clip corresponding to one period of the periodic movements (paragraphs [0042]-[0043], referring to using the variations in the metric to identify images corresponding to particular positions in a cardiac cycle, which corresponds to periods of the periodic movements).
With regards to claim 9, Deo et al. disclose that the feature vector comprises an encoding of a spatial pattern in a respective image; and/or one or more features of: a histogram of oriented gradients in the respective image; a scale invariant feature transform of the respective image; and a local binary pattern of the respective image (paragraphs [0042]-[0043], referring to segmentation being used to identify metrics, such as size, which is used for the phasing; paragraph [0055], referring to segmentation being performed by identifying each pixel as being in a structure or not, e.g., as signified by 0 or 1, which corresponds to a local binary pattern of the respective image).
With regards to claim 10, Deo et al. disclose that step v) comprises selecting a first feature vector, fp, in the sequence of feature vectors; correlating the first feature vector fp with each of the other feature vectors in the sequence of feature vectors to obtain an N dimensional correlation vector c, wherein N is the number of images in the first clip (paragraphs [0042]-[0043], referring to identifying peaks/maximums and valleys/minimums of the metrics plotted over time, which would ultimately provide an N dimensional correlation vector c (i.e. feature vectors vs. time) with N being the number of images in the first clip (i.e. each time point corresponds to an image) and wherein determining a peak or valley would inherently require a correlation/comparison/matching of a first selected feature vector (i.e. size measurement at one particular time point) to the other feature vectors (i.e. other size measurements) over time).
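For illustration only (this is not part of Deo et al.'s disclosure), the correlation step recited in claim 10 can be sketched as follows; the function name and the choice of normalized correlation are assumptions made for the sketch:

```python
import numpy as np

def correlation_vector(features: np.ndarray, p: int) -> np.ndarray:
    """Correlate the feature vector fp at index p against every feature
    vector in the clip, yielding an N-dimensional correlation vector c,
    where N is the number of images in the first clip.

    features : (N, D) array, one D-dimensional feature vector per image.
    """
    fp = features[p]
    # Normalized inner product of fp with each frame's feature vector.
    num = features @ fp
    denom = np.linalg.norm(features, axis=1) * np.linalg.norm(fp)
    return num / np.where(denom == 0.0, 1.0, denom)

# Hypothetical 6-image clip with 3-dimensional feature vectors.
feats = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])
c = correlation_vector(feats, p=0)  # len(c) == 6; c peaks at indices 0, 2, 4
```

Frames similar to the selected frame produce correlation values near 1, so the spacing of peaks in c reflects the period of the periodic movements.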
With regards to claim 14, Deo et al. disclose an ultrasound imaging system, comprising an ultrasound probe for transmitting ultrasound waves and receiving echo information (paragraphs [0026], [0045], referring to the ultrasound probe/transducer for acquiring the images/views); and an apparatus as in claim 13 for processing a sequence of US images of an anatomical feature with periodic movements obtained based on the received echo information [see rejection of claim 13].
With regards to claim 15, Deo et al. disclose a computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method as claimed in claim 1 (paragraphs [0005], [0007], referring to the computer readable media associated with the methods; see rejection of claim 1).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Deo et al. as applied to claim 1 above, and further in view of Guracar (US Pub No. 2015/0031995).
With regards to claim 3, as discussed above, Deo et al. meet the limitations of claim 1. However, they do not specifically disclose that their method further comprises determining a minimum intensity image, Imin, from the first cluster of images in the first clip, wherein the intensity of each image component in the minimum intensity image is determined as the minimum intensity of image components in equivalent positions in each of the images in the first cluster of images.
Guracar discloses suppressing motion artifacts in ultrasonic imaging, wherein motion tracking is performed to derive parameter values over time, wherein for each voxel, a value is selected as a function of data from each of the frames of data (Abstract; paragraphs [0064]-[0066]; Figure 1). As an example, a minimum or other data in relation to data of the selected frames is selected based on comparison, wherein the frames of the selected subset are combined into a persisted frame or single frame (paragraph [0066], note that by combining the minimum values from each of the frames to form a persisted, single frame, a minimum intensity image is determined).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the method of the above combined references further comprise determining a minimum intensity image, Imin, from the first cluster of images in the first clip, wherein the intensity of each image component in the minimum intensity image is determined as the minimum intensity of image components in equivalent positions in each of the images in the first cluster of images, as taught by Guracar, in order to suppress motion artifacts and obtain a persisted image (Abstract; paragraph [0066]).
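As an illustration of the minimum intensity image computation recited in claim 3 (a minimal sketch only, not Guracar's actual implementation; the array layout is an assumption):

```python
import numpy as np

def minimum_intensity_image(cluster: np.ndarray) -> np.ndarray:
    """Compute Imin: each image component (pixel) of the result takes
    the minimum intensity found at the equivalent position across all
    images in the cluster.

    cluster : (K, H, W) array of K images from the first clip.
    """
    return cluster.min(axis=0)

# Hypothetical cluster of three 2x2 images.
imgs = np.array([[[5, 9], [2, 7]],
                 [[3, 8], [4, 1]],
                 [[6, 2], [9, 5]]])
i_min = minimum_intensity_image(imgs)  # [[3, 2], [2, 1]]
```

Because transient bright structures rarely occupy the same pixel in every frame, the pixel-wise minimum tends to suppress them, which is consistent with Guracar's motion-artifact-suppression rationale.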
Claim(s) 4-7 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Deo et al. in view of Guracar as applied to claim 3 above, and further in view of Kasahara et al. (US Pub No. 2010/0262006).
With regards to claim 4, as discussed above, the above combined references meet the limitations of claim 3. However, though Guracar does disclose that the Imin image serves as a reference image as it is the image from which parameter values are derived (paragraphs [0064]-[0068]; Figure 2), the above combined references do not specifically disclose that the method further comprises determining a first image, Ipivot, in the first clip that represents a turning point in the periodic motion, by comparing each image in the clip to Imin and selecting Ipivot as an image having either minimal or maximal intensity deviations from Imin.
Kasahara et al. disclose an ultrasound diagnostic apparatus for forming display images of an object in periodic motion, wherein a virtual period setting unit detects peak values (local maximum values) of the mutual difference values and determines an interval (heartbeat, HB) between peak values to be a period of the heart (period of a heartbeat), wherein the mutual difference values are determined by determining the difference in pixel values of two adjacent sets of tomographic image data in the Z-axis direction (Abstract; paragraphs [0038]-[0039], [0042], note that the “peak values” correspond to maximal intensity deviations from a reference image, wherein in the above combined references, the reference image corresponds to Imin). The mutual difference values allow the dilation and contraction of the heart (i.e. turning points in periodic motion of the heart) to be distinguished from one another (paragraph [0047]; Figures 1-4). Images can be searched/selected using the virtual period, wherein tomographic image data corresponding to the position wherein the mutual difference value becomes a maximum can be selected as a representative base image which is then used to select new base images (paragraphs [0050]-[0055], note that the selected tomographic image data corresponds to the claimed Ipivot image which represents a “turning point in the periodic motion”; Figure 5).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the method of the above combined references further comprise determining a first image, Ipivot, in the first clip that represents a turning point in the periodic motion, by comparing each image in the clip to Imin and selecting Ipivot as an image having either minimal or maximal intensity deviations from Imin, as taught by Kasahara et al., in order to distinguish different periodic events, such as dilation/contraction of the heart, from one another, thus providing a useful diagnostic aid (Abstract; paragraph [0038]).
With regards to claim 5, Kasahara et al. disclose that the method further comprises determining a first subset of images in the first clip corresponding to one period of the periodic movements, as images lying between the first image, and a second image representing the next consecutive turning point in the periodic motion (paragraphs [0050]-[0055]; Figure 5, in particular, see Figure 5C, wherein the arrows of dotted lines show the positions of a plurality of base images which depict boundaries for the division bases and represent different image groups corresponding to a different cardiac periods, wherein the first dotted arrow line can represent the “first image” and the next dotted arrow line can represent the “second image”).
With regards to claim 6, Kasahara et al. disclose that the method further comprises determining an image number of a third image at a predefined phase of the periodic motion in the clip; and determining a relative location of the third image in the sequence compared to the first and second images; and determining a second subset of images in the first clip that start and end at the predefined phase of the motion in the clip by selecting the second subset of images relative to the first subset of images, shifted by the relative location of the third image (paragraphs [0050]-[0055]; Figure 5, in particular, see Figure 5C, wherein the arrows of dotted lines show the positions of a plurality of base images which depict boundaries for the division bases and represent different image groups corresponding to a different cardiac periods/phases, wherein the third (or greater than third) dotted arrow line can represent the “third image” which is at a relative location in the sequence compared to the first and second images, represented respectively by the first and second dotted arrows).
With regards to claim 7, Deo et al. disclose that the anatomical feature is a heart (Abstract; paragraphs [0033], [0036]-[0038]), and the method further comprises repeating steps i), ii) and iii) for a plurality of different predefined phases of the periodic motion; and/or repeating steps i), ii) and iii) for a plurality of different view planes to obtain a plurality of single-cycle clips that are all synchronized to a common cardiac phase for display to a user (paragraphs [0040]-[0042], referring to the videos being phased to identify images at the end of cardiac systole and diastole, wherein segmentation and structure information over cycles is used to identify which part (stage/phase) of a cycle a given image corresponds to (systole or diastole), and thus steps i), ii) and iii) would necessarily be repeated for a plurality of the different predefined phases (i.e. systole/diastole) of the periodic motion of the heart; paragraphs [0033], [0035], referring to the CNNs being used for automatically determining echo views which include a plurality of different views, such as the A2c, A3c, A4c, PLAX views, etc., and therefore the steps of using the ML model to label/classify detections of the anatomical feature, obtaining a first cluster, and using the first cluster as a first clip of the first view plane would be repeated for each of the plurality of different views; Figures 1-2, 10).
With regard to claim 12, as discussed above, the above combined references meet the limitations of claim 1. However, though Deo et al. disclose that the anatomical feature is an object in periodic motion, such as a heart (Abstract; paragraphs [0033], [0036]-[0038]), Deo et al. do not specifically disclose that the anatomical feature is a fetal heart.
Kasahara et al. disclose an ultrasound diagnostic apparatus for forming display of images of an object in periodic motion, wherein ultrasound images are formed of an object in unstable periodic motion, such as the heart of a fetus (Abstract; paragraphs [0007]-[0008], [0098]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to substitute the object in periodic motion of Deo et al. with an object in periodic motion comprising a fetal heart, as taught by Kasahara et al., as the substitution of one known object in periodic motion for another yields predictable results (i.e. effective imaging of an anatomical object exhibiting periodic motion) to one of ordinary skill in the art. One of ordinary skill in the art would have been able to carry out such a substitution and the results are reasonably predictable.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Deo et al. as applied to claim 8 above, and further in view of Florin et al. (US Pub No. 2006/0251307).
With regards to claim 11, as discussed above, Deo et al. meet the limitations of claim 8. Further, Deo et al. disclose that step vi) of the method comprises determining an average number of images in a period of the periodic motions, from the detected peaks (paragraphs [0042]-[0043], referring to phasing being performed by defining peaks/valleys of the curve defining different aspects of the cardiac cycle; paragraphs [0065], [0102]-[0103], referring to measurements being averaged across every cardiac cycle of every relevant video).
However, though Deo et al. do disclose that the method comprises performing segmentation/edge detection to detect peaks in the periodic signal by detecting peaks of the correlations (paragraphs [0042]-[0043]), Deo et al. do not specifically disclose that detecting the peaks is performed by determining zero-crossings in a one-dimensional Laplacian domain of the correlations.
Florin et al. disclose that intensity peaks of a curve may be detected by using a zero-crossing of the Laplacian (paragraphs [0036]-[0041]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to substitute the technique for detecting peaks of Deo et al. with a technique for detecting peaks comprising determining zero-crossings in a one-dimensional Laplacian domain of the correlations, as taught by Florin et al., as the substitution of one known technique for detecting peaks for another yields predictable results (i.e. provide effective peak detection) to one of ordinary skill in the art. One of ordinary skill in the art would have been able to carry out such a substitution and the results are reasonably predictable.
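As a general illustration of the peak-detection technique attributed to Florin et al. (a minimal sketch only, not their implementation; the discrete second difference stands in for the one-dimensional Laplacian, and the example signal is hypothetical):

```python
import numpy as np

def detect_peaks(signal: np.ndarray) -> np.ndarray:
    """Detect local maxima of a 1-D signal from sign changes of the
    first difference, confirmed where the discrete 1-D Laplacian
    (second difference) is negative, i.e. the curve is concave."""
    d1 = np.diff(signal)
    # The slope crosses zero from positive to non-positive at a maximum.
    candidates = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0] + 1
    d2 = np.diff(signal, 2)  # Laplacian at interior index p is d2[p - 1]
    return candidates[d2[candidates - 1] < 0]

# Hypothetical correlation signal covering two cardiac cycles.
corr = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 1.0, 4.0, 1.0])
peaks = detect_peaks(corr)                 # indices [2, 6]
avg_period = float(np.diff(peaks).mean())  # average images per period: 4.0
```

The average spacing between detected peaks corresponds to the claimed average number of images in a period of the periodic motions.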
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE L FERNANDEZ whose telephone number is (571)272-1957. The examiner can normally be reached Monday-Friday 9:00 AM - 5:30 PM (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHERINE L FERNANDEZ/Primary Examiner, Art Unit 3798