DETAILED ACTION
Claims 1-20 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claims recite the limitation "determining an extent of the ultrasound image data," where "extent" is a relative term of degree and it is unclear what is meant by the extent. For purposes of examination, the examiner interprets this limitation as analogous to the region of interest of the ultrasound image data. The applicant is encouraged to amend the claims to clarify what is meant by this limitation.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-8, 12-15 and 17-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Choi (US 2018/0330518 A1).
Regarding claim 1, Choi discloses: A method for analyzing ultrasound image data obtained from ultrasonic imaging, the method comprising (Choi, abstract, the system is for processing ultrasound image data and analyzing a region of interest):
obtaining, by an ultrasound probe, the ultrasound image data (Choi, [0024] the transducer probe is configured to acquire an image of a selected anatomical portion);
transforming, by a processor (Choi, [0026] the system has at least one processor), the ultrasound image data with at least one transform to generate at least one set of transformed data (Choi, [0037] the system performs a Fourier transform on the ultrasound data to generate FFT data (as shown in figure 4));
inputting the ultrasound image data and the at least one set of transformed data into a machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image, and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4), wherein the machine-learning model is implemented by the processor (Choi, [0044] the machine learning functions may be executed by a processor);
[Choi, Figure 4, reproduced with emphasis added]
implementing, by the processor (Choi, [0044] the machine learning functions may be executed by a processor), the machine-learning model with the ultrasound image data and the at least one set of transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image, and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4);
and identifying, by the processor (Choi, [0044] the machine learning functions may be executed by a processor), at least one feature in the ultrasound image data as determined by the machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (transformed) image data as the inputs and generate feature maps (at least one feature) using the input image data as shown in figure 4).
[Choi, Figure 4, reproduced with emphasis added]
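For illustration only, the following is a minimal sketch of the pipeline mapped above under claim 1: a spatial B-mode image and its Fourier-transformed counterpart are supplied together to a CNN that outputs feature maps. The architecture, array shapes, and function names are illustrative assumptions and are not taken from Choi.

```python
# Hedged sketch: B-mode image plus its 2-D FFT log-magnitude are stacked
# as input channels to a small CNN that emits a feature map. Channel
# counts and kernel sizes are illustrative assumptions, not Choi's.
import numpy as np
import torch
import torch.nn as nn

def transform_inputs(bmode: np.ndarray) -> torch.Tensor:
    """Stack the B-mode image with its log-magnitude FFT as two channels."""
    fft_mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(bmode))))
    stacked = np.stack([bmode, fft_mag]).astype(np.float32)
    return torch.from_numpy(stacked).unsqueeze(0)  # shape (1, 2, H, W)

class FeatureCNN(nn.Module):
    """Toy stand-in for a CNN feature-extraction unit (cf. Choi, Fig. 4)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one feature map out
        )
    def forward(self, x):
        return self.body(x)

bmode = np.random.rand(128, 128)            # placeholder for probe data
features = FeatureCNN()(transform_inputs(bmode))
print(features.shape)                       # torch.Size([1, 1, 128, 128])
```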
Regarding claim 2, Choi discloses: The method of claim 1, further comprising determining an extent of the ultrasound image data according to a region of interest (Choi, [0024]-[0025] the probe ultrasound signal ranges may be determined and adjusted based upon the region of interest being imaged).
Regarding claim 3, Choi discloses: The method of claim 1, wherein the machine-learning model comprises a convolutional neural network (Choi, [0028] the system uses a convolutional neural network).
Regarding claim 4, Choi discloses: The method of claim 3, further comprising training the machine-learning model by:
inputting annotated ultrasound image data into the machine-learning model (Choi, [0037] the system may take B-mode images and feature maps as input (annotated image data)), wherein the ultrasound image data indicates the presence or absence of the at least one feature (Choi, [0073] the features identify a target of interest; [0076] the features are used in determining whether or not a pixel belongs to the organ of interest, so the presence or absence of the target is determined by the features);
transforming the ultrasound image data using at least one transform to generate transformed data (Choi, [0037] the system performs a Fourier transform on the ultrasound data to generate FFT data (as shown in figure 4));
and inputting the transformed data into the machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image, and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4).
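As a non-limiting illustration of the claim 4 training inputs as mapped above, the sketch below pairs annotated (labeled) images with their transformed counterparts before they are fed to the model. The dataset class, names, and label encoding are assumptions for illustration, not Choi's implementation.

```python
# Hedged sketch of claim 4 training inputs: annotated B-mode images are
# paired with their FFT-transformed counterparts and a presence/absence
# label. All names here are illustrative assumptions.
import numpy as np
import torch
from torch.utils.data import Dataset

class AnnotatedUltrasoundDataset(Dataset):
    """Yields (image + FFT channels, presence/absence label) pairs."""
    def __init__(self, images, labels):
        self.images = images      # list of 2-D numpy arrays
        self.labels = labels      # 1.0 = feature present, 0.0 = absent
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        img = self.images[i]
        fft_mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
        x = np.stack([img, fft_mag]).astype(np.float32)
        return torch.from_numpy(x), torch.tensor(self.labels[i])
```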
Regarding claim 6, Choi discloses: The method of claim 1, wherein the ultrasound image data comprises spatial B-mode image data (Choi, [0016] the image is a B-mode image; [0029] the B-mode image is generated based on return echoes and is 2D, making it a spatial B-mode image).
Regarding claim 7, Choi discloses: The method of claim 1, wherein the at least one set of transformed data comprises at least one of Fourier transformed data, slant transformed data, or Hadamard transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (Fourier transform) (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4).
Regarding claim 8, Choi discloses: The method of claim 7, wherein the at least one set of transformed data comprises only one of Fourier transformed data, slant transformed data, or Hadamard transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (Fourier transform) (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4, where only an FFT (Fourier transform) is performed).
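Since claims 7 and 8 enumerate Fourier, slant, and Hadamard transforms, a brief sketch of two of those transforms applied to an image block may aid the record. The slant transform is omitted; it follows the same separable matrix pattern with a slant matrix in place of the Hadamard matrix. All names here are illustrative.

```python
# Hedged sketch of the transform options recited in claims 7 and 8,
# applied to a square image block whose side is a power of two.
import numpy as np
from scipy.linalg import hadamard

def fourier_2d(block: np.ndarray) -> np.ndarray:
    """2-D discrete Fourier transform of an image block."""
    return np.fft.fft2(block)

def hadamard_2d(block: np.ndarray) -> np.ndarray:
    """Separable 2-D Walsh-Hadamard transform (side must be a power of 2)."""
    n = block.shape[0]
    H = hadamard(n)
    return H @ block @ H.T / n   # symmetric 1/n normalization

block = np.random.rand(8, 8)
F = fourier_2d(block)            # Fourier transformed data
W = hadamard_2d(block)           # Hadamard transformed data
```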
Regarding claim 12, Choi discloses: A system for analyzing ultrasound image data obtained from ultrasonic imaging, the system comprising:
an ultrasound probe configured to obtain the ultrasound image data (Choi, [0024] the transducer probe is configured to acquire an image of a selected anatomical portion);
and a processor (Choi, [0026] the system has at least one processor) configured to transform the ultrasound image data with at least one transform to generate at least one set of transformed data (Choi, [0037] the system performs a Fourier transform on the ultrasound data to generate FFT data (as shown in figure 4)),
input the ultrasound image data and the at least one set of transformed data into a machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4), wherein the machine-learning model is implemented by the processor (Choi, [0044] the machine learning functions may be executed by a processor),
implement the machine-learning model with the ultrasound image data and the at least one set of transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image, and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4),
and identify at least one feature in the ultrasound image data as determined by the machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (transformed) image data as the inputs and generate feature maps (at least one feature) using the input image data as shown in figure 4).
Regarding claim 13, Choi discloses: The system of claim 12, wherein the processor is further configured to determine an extent of the ultrasound image data according to a region of interest (Choi, [0024]-[0025] the probe ultrasound signal ranges may be determined and adjusted based upon the region of interest being imaged).
Regarding claim 14, Choi discloses: The system of claim 12, wherein the machine-learning model comprises a convolutional neural network (Choi, [0028] the system uses a convolutional neural network).
Regarding claim 15, Choi discloses: The method of claim 14, wherein the processor is further configured to train the machine-learning model by inputting annotated ultrasound image data into the machine-learning model (Choi, [0037] the system may take B-mode images and feature maps as input (annotated image data)), wherein the ultrasound image data indicates the presence or absence of the at least one feature (Choi, [0073] the features identify a target of interest; [0076] the features are used in determining whether or not a pixel belongs to the organ of interest, so the presence or absence of the target is determined by the features), transforming the ultrasound image data using at least one transform to generate transformed data (Choi, [0037] the system performs a Fourier transform on the ultrasound data to generate FFT data (as shown in figure 4)), and inputting the transformed data into the machine-learning model (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4).
Regarding claim 17, Choi discloses: The system of claim 12, wherein the ultrasound image data comprises spatial B-mode image data (Choi, [0016] the image is a B-mode image; [0029] the B-mode image is generated based on return echoes and is 2D, making it a spatial B-mode image).
Regarding claim 18, Choi discloses: The system of claim 12, wherein the at least one set of transformed data comprises at least one of Fourier transformed data, slant transformed data, or Hadamard transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (Fourier transform) (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4).
Regarding claim 19, Choi discloses: The method of claim 12, wherein the at least one set of transformed data comprises only one of Fourier transformed data, slant transformed data, or Hadamard transformed data (Choi, [0037] the CNN autoencoder unit may take the input spatial ultrasound image and the FFT (Fourier transform) (transformed) image data as the inputs and generate feature maps using the input image data as shown in figure 4, where only an FFT (Fourier transform) is performed).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5, 11, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (US 2018/0330518 A1) in view of Vega (US 2024/0273726 A1).
Regarding claim 5, Choi does not disclose: The method of claim 4, further comprising updating the machine-learning model to reduce a loss function as the machine-learning model receives additional ultrasound image data indicating the presence or absence of the at least one feature.
However, in the same field of endeavor, Vega teaches: further comprising updating the machine-learning model to reduce a loss function as the machine-learning model receives additional ultrasound image data indicating the presence or absence of the at least one feature (Vega, [0104] the network is trained to detect features; a loss function is computed during training and is used to update the model weights to reduce loss).
The combination of Choi and Vega would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi and Vega both teach methods of training a machine-learning model to identify features of B-mode ultrasound images; however, Choi does not teach updating the model to reduce loss. The motivation to add loss-based updating is that it helps increase the accuracy of the model's predictions (Vega, [0103]-[0105]).
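For illustration of the loss-based updating mapped from Vega [0104], the sketch below performs one weight update per batch of additional labeled data. The optimizer, loss function, and model are assumptions chosen for brevity, not Vega's implementation.

```python
# Hedged sketch: each additional labeled batch is used to compute a loss
# whose gradient updates the model weights, reducing the loss over time.
# The linear model and Adam/BCE choices are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128 * 128, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

def update_on_batch(x: torch.Tensor, y: torch.Tensor) -> float:
    """One weight update from a batch of (inputs, presence/absence labels)."""
    optimizer.zero_grad()
    loss = criterion(model(x).squeeze(1), y)
    loss.backward()                 # gradients of the loss
    optimizer.step()                # weight update that reduces the loss
    return loss.item()

x = torch.randn(4, 2, 128, 128)     # image + FFT channels, as sketched above
y = torch.randint(0, 2, (4,)).float()
print(update_on_batch(x, y))
```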
Regarding claim 11, the combination of Choi and Vega teaches: The method of claim 1, wherein the at least one feature comprises at least one of an organ of a patient having a benign lesion or a malignant lesion (Vega, [0119] the method monitors an attribute (feature) of interest, such as a tumor, in a body region such as an organ using a machine learning algorithm).
The combination of Choi and Vega would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi teaches a system with the capacity to assess ultrasound images for target organs, but does not teach assessing target organs for lesions. Vega teaches a method of automatically identifying tumors or lesions on organs from ultrasound images. The motivation for adding such a method is that it allows tracking and assessment of tumor growth for prognostic determination (Vega, [0005]-[0008]).
Regarding claim 16, the combination of Choi and Vega teaches: The method of claim 15, wherein the processor is further configured to implement the machine-learning model to reduce a loss function as the machine-learning model receives additional ultrasound image data indicating the presence or absence of the at least one feature (Vega, [0104] the network is trained to detect features; a loss function is computed during training and is used to update the model weights to reduce loss).
The combination of Choi and Vega would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi and Vega both teach methods of training a machine-learning model to identify features of B-mode ultrasound images; however, Choi does not teach updating the model to reduce loss. The motivation to add loss-based updating is that it helps increase the accuracy of the model's predictions (Vega, [0103]-[0105]).
Regarding claim 20, the combination of Choi and Vega teaches: The method of claim 1, wherein the at least one feature comprises at least one of an organ of a patient having a benign lesion or a malignant lesion (Vega, [0119] the method monitors an attribute (feature) of interest, such as a tumor, in a body region such as an organ using a machine learning algorithm).
The combination of Choi and Vega would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi teaches a system with the capacity to assess ultrasound images for target organs, but does not teach assessing target organs for lesions. Vega teaches a method of automatically identifying tumors or lesions on organs from ultrasound images. The motivation for adding such a method is that it allows tracking and assessment of tumor growth for prognostic determination (Vega, [0005]-[0008]).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (US 2018/0330518 A1) in view of Powell (EP 0126148 B1).
Regarding claim 9, Choi does not teach: The method of claim 7, wherein the at least one set of transformed data comprises only two of Fourier transformed data, slant transformed data, or Hadamard transformed data.
However, in the same field of endeavor of image processing, Powell teaches: wherein the at least one set of transformed data comprises only two of Fourier transformed data, slant transformed data, or Hadamard transformed data (Powell, column 17, lines 30-58, the system performs two-stage denoising, where the first stage is a slant transform and the second is a Walsh-Hadamard transform).
[Powell, column 17, reproduced]
The combination of Choi and Powell would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi teaches a method of processing an ultrasound image using machine learning, but does not teach the use of multiple different transforms to generate transformed data. Powell remedies this deficiency, teaching a method of using multi-stage image transforms to denoise images. The motivation for adding multiple types of data transforms as taught by Powell is that using multiple image transforms over multiple stages can visibly reduce image noise while minimizing loss (Powell, abstract).
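For illustration of multi-stage transform-domain denoising in the spirit of Powell's two-stage scheme, the sketch below transforms an image block, attenuates small coefficients, inverts, and repeats. A Walsh-Hadamard matrix stands in for both stages here; Powell's first stage uses a slant transform, whose matrix would be substituted in the same separable pattern, and the hard-thresholding rule is an assumption for illustration.

```python
# Hedged sketch of two-stage transform-domain denoising: forward transform,
# zero out small (assumed noisy) coefficients, inverse transform, repeat.
# The Hadamard stand-in and the threshold values are illustrative assumptions.
import numpy as np
from scipy.linalg import hadamard

def denoise_stage(block: np.ndarray, T: np.ndarray, thresh: float) -> np.ndarray:
    n = block.shape[0]
    coeffs = T @ block @ T.T / n         # forward separable transform
    coeffs[np.abs(coeffs) < thresh] = 0  # hard-threshold small coefficients
    return T.T @ coeffs @ T / n          # inverse transform (T @ T.T = n*I)

noisy = np.random.rand(16, 16)
H = hadamard(16)
stage1 = denoise_stage(noisy, H, thresh=0.05)   # cf. Powell's slant stage
stage2 = denoise_stage(stage1, H, thresh=0.05)  # cf. Powell's Walsh-Hadamard stage
```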
Regarding claim 10, the combination of Choi and Powell teaches: The method of claim 7, wherein the at least one set of transformed data comprises Fourier transformed data, slant transformed data, and Hadamard transformed data (Powell, figures 1 and 2 show that the system is a multi-stage transform method for denoising images, where image data is fed into multiple direct transform networks (three transforms); column 8, line 64 through column 9, line 20 notes that these "direct transform blocks" may be any transform, including a Fourier, slant, or Walsh-Hadamard transform).
[Powell, column 9, reproduced]
[Powell, Figure 2, reproduced with emphasis added]
The combination of Choi and Powell would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Choi teaches a method of processing an ultrasound image using machine learning, but does not teach the use of multiple different transforms to generate transformed data. Powell remedies this deficiency, teaching a method of using multi-stage image transforms to denoise images. The motivation for adding multiple types of data transforms as taught by Powell is that using multiple image transforms over multiple stages can visibly reduce image noise while minimizing loss (Powell, abstract).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous prior art as cited by the examiner, please see the attached PTO-892, Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666