DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed foreign application under 35 U.S.C. 119(a)-(d) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of the earlier filing date as follows: the later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the foreign application), and the disclosure of the invention in the prior application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of Foreign Application KR10-2019-016616 fails to provide adequate support or enablement, in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Specifically, Foreign Application KR10-2019-016616 does not disclose training a neural network that reconstructs an image using an image pair generated by simulating a change in time of flight or a change in signal strength of ultrasound data. Therefore, the effective filing date of the present application is considered to be July 21, 2020, the filing date of Foreign Application KR10-2020-0090261, which is properly certified.
Claim Objections
Claims 5 and 8 are objected to because of the following informalities:
Claim 5, line 6, “TOF” should read “time of flight (TOF)”; and
Claim 8, line 9, “TOF” should read “time of flight (TOF)”.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-4, and 8-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cheng et al. (“Deep Learning Image Reconstruction Method for Limited-Angle Ultrasound Tomography in Prostate Cancer”, hereinafter Cheng).
Regarding claim 1, Cheng teaches a method of operating an image device operated by at least one processor (Abstract), the method comprising:
receiving an input of virtual tissues modeled with an arbitrary shape and a quantitative feature (pg. 3, section 1.1 discloses obtaining input data with specific parameters and values (quantitative features) and shapes);
simulating a change in time of flight (TOF) or a signal strength change of ultrasound data having penetrated the virtual tissues modeled with a speed-of-sound distribution or an attenuation coefficient distribution in a first and in a second direction (pgs. 3-4, section 1.1 discloses simulating a time of flight (TOF) image using the speed of sound (SOS) parameters), and creating an image pair representing the TOF change or the signal strength change (pgs. 3-4, section 1.1, the generated TOF images are considered the image pair created to represent the TOF change);
creating a speed-of-sound distribution image or an attenuation coefficient distribution image of each of the virtual tissues as a ground truth of an image pair created in the corresponding virtual tissue (pgs. 3-4, section 1.1 discloses generating SOS images, which are then used to generate the corresponding TOF images; the generated SOS images therefore represent a ground truth of the image pair); and
training a first neural network that reconstructs the speed-of-sound distribution image from an input image pair or training a second neural network that reconstructs the attenuation coefficient distribution image from the input image pair, by using training data including an image pair of each virtual tissue and the ground truth (fig. 2 on pg. 2 and paras. 1-2 of pg. 3 disclose “a DL-based framework that takes in time-of-flight (ToF) measurements derived from the sensor data as the input and returns a SoS image of the medium as the output”. Section 1.1 further discloses training the deep learning network using the TOF and SOS data).
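For context only, the simulation recited in claim 1 and mapped to section 1.1 of Cheng above can be illustrated with a minimal straight-ray sketch in Python; the grid size, speed values, and names below are hypothetical and are taken neither from Cheng nor from the claims:

import numpy as np

def simulate_tof(sos_map, dx=1e-3):
    # Straight-ray sketch: the TOF of each horizontal ray is the line
    # integral of slowness (1 / speed of sound) across the medium.
    slowness = 1.0 / sos_map            # s/m at each grid point
    return slowness.sum(axis=1) * dx    # one TOF value (s) per ray

# Virtual tissue: water-like background with a faster elliptical inclusion.
sos = np.full((128, 128), 1500.0)
yy, xx = np.ogrid[:128, :128]
sos[((yy - 64) / 20.0) ** 2 + ((xx - 64) / 30.0) ** 2 <= 1.0] = 1560.0

tof_change = simulate_tof(sos) - simulate_tof(np.full_like(sos, 1500.0))
# Repeating the simulation in a second direction would yield the claimed
# image pair, with the modeled SOS distribution serving as the ground truth.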
Regarding claim 3, Cheng teaches the method of claim 1, as set forth above. Cheng further teaches creating a geometric image representing a modeled shape of each virtual tissue (pgs. 3-4, section 1.1 discloses “the prostate and ROI shapes are simulated as ellipses based on equation 1 as a representation of what one might see in the body without using complicated higher order shapes”); and adding the geometric image of each virtual tissue to the training data (pgs. 3-4, section 1.1 discloses the shapes are part of the generated training data).
Regarding claim 4, Cheng teaches the method of claim 3, as set forth above. Cheng further teaches that the training comprises training the first neural network or the second neural network using the geometric image as a priori information (pgs. 3-4, section 1.1 discloses training the neural network using the training data, which includes the simulated shapes (the geometric image); the shape data is used for simulating the SOS images and is therefore used as a priori information).
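For illustration of using a geometric image as a priori information, as discussed for claims 3-4 above, a minimal sketch follows (PyTorch is used for illustration only; the tensor sizes and mask region are hypothetical):

import torch

# Hypothetical sketch: the geometric image (a binary mask of the modeled
# shape) is supplied as an extra input channel alongside the TOF image
# pair, so the network can condition its reconstruction on the shape prior.
tof_pair = torch.randn(1, 2, 128, 128)   # two-direction TOF-change images
geometry = torch.zeros(1, 1, 128, 128)   # binary mask of the modeled shape
geometry[:, :, 44:84, 34:94] = 1.0       # hypothetical inclusion region
net_input = torch.cat([tof_pair, geometry], dim=1)  # 3-channel network input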
Regarding claim 8, Cheng teaches a method of operating an image device operated by at least one processor (Abstract), the method comprising:
receiving input images created from each virtual tissue and a priori information as training data (pgs. 3-4, section 1.1 discloses obtaining SOS images from the tissues and simulated shapes of the prostate and ROI (a priori information)); and
training a neural network that reconstructs a quantitative feature of the virtual tissue from the input images under a guidance of the a priori information (pgs. 3-4, section 1.1 discloses training the neural network that reconstructs a speed of sound (SOS) image using the training data, which includes the simulated shapes),
wherein the a priori information is a geometric image representing a modeled shape of each virtual tissue (pgs. 3-4, section 1.1 discloses “the prostate and ROI shapes are simulated as ellipses based on equation 1 as a representation of what one might see in the body without using complicated higher order shapes”), and
wherein the input images are images representing a TOF change or a signal strength change of ultrasound data having penetrated a virtual tissue modeled with speed-of-sound distribution, in different directions (pgs. 3-4, section 1.1 discloses training data includes simulated time of flight (TOF) images that were simulated using speed of sound (SOS) parameters).
Regarding claim 9, Cheng teaches the method of claim 8, as set forth above. Cheng further teaches that training the neural network comprises inputting the input images into an encoder of the neural network, and training the neural network to minimize a loss between a ground truth and a result that the decoder reconstructs a feature extracted by the encoder under a guidance of the a priori information (section 1.2 on pgs. 4-5 discloses the use of an encoder and a decoder; pgs. 5-6, section 2 further disclose that the SOS images represent ground truth data; and fig. 5 shows the neural network is trained to minimize loss between the ground truth and the result of the decoder).
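For illustration of the training scheme characterized above (an encoder extracts a feature, a decoder reconstructs it, and training minimizes a loss against the ground truth), a minimal sketch follows; the layers, sizes, and data are hypothetical and do not reflect Cheng’s actual architecture:

import torch
import torch.nn as nn

# Toy encoder-decoder pair; a real network would be much deeper.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
decoder = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(4, 3, 128, 128)        # TOF pair + geometric prior channel
ground_truth = torch.randn(4, 1, 128, 128)  # modeled SOS distribution images

for _ in range(10):                          # toy training loop
    optimizer.zero_grad()
    reconstruction = decoder(encoder(inputs))
    loss = loss_fn(reconstruction, ground_truth)  # loss vs. the ground truth
    loss.backward()
    optimizer.step()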
Regarding claim 10, Cheng teaches the method of claim 9, as set forth above. Cheng further teaches that if the input images are images representing the TOF change, the ground truth is an image representing a modeled speed-of-sound distribution of each virtual tissue (pgs. 5-6, section 2 disclose that the SOS images represent ground truth data).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Rothberg et al. (US 20140180099, hereinafter Rothberg).
Regarding claim 2, Cheng teaches the method of claim 1, as set forth above. Cheng does not specifically teach the image pair comprises images representing the TOF change of an ultrasound signal in a relationship matrix between transducer channels and receiver channels in a corresponding direction.
However, Rothberg, in a similar field of endeavor, teaches the image pair comprising images representing the TOF change of an ultrasound signal in a relationship matrix between transducer channels and receiver channels in a corresponding direction ([0353] discloses the time of flight measurement is represented by a matrix along the path length (direction)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of Rothberg, in which the image pair comprises images representing the TOF change of an ultrasound signal in a relationship matrix between transducer channels and receiver channels in a corresponding direction, to the method of Cheng to achieve the predictable result of providing the information in a format that is easier to use for training the neural network, thereby increasing the efficiency of the training process.
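For illustration of a TOF relationship matrix between transmit and receive channels, as characterized in the rejection of claim 2 above, a minimal sketch follows (hypothetical array geometry; straight-ray propagation at a uniform background speed):

import numpy as np

n_tx, n_rx = 64, 64
# Hypothetical linear arrays facing each other across a 40 mm medium.
tx = np.stack([np.zeros(n_tx), np.linspace(0.0, 0.063, n_tx)], axis=1)
rx = np.stack([np.full(n_rx, 0.04), np.linspace(0.0, 0.063, n_rx)], axis=1)

c0 = 1500.0  # background speed of sound, m/s
dist = np.linalg.norm(tx[:, None, :] - rx[None, :, :], axis=2)
tof_matrix = dist / c0  # entry (i, j): TOF from transmit channel i to receive channel j
# A TOF-change image is the difference of two such matrices, computed
# with and without the modeled tissue in the propagation path.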
Claims 5-6 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Shi et al. (US 20220207791, hereinafter Shi).
Regarding claim 5, Cheng teaches a method of operating an image device operated by at least one processor (Abstract), the method comprising:
receiving images created from virtual tissues as training data (pgs. 3-4, section 1.1 discloses obtaining TOF images from simulated (virtual) SOS images to train the deep learning network); and
training a neural network with an encoder and a decoder, by using the training data (fig. 2 on pg. 2 and paras. 1-2 of pg. 3 disclose “a DL-based framework that takes in time-of-flight (ToF) measurements derived from the sensor data as the input and returns a SoS image of the medium as the output”. Section 1.1 further discloses training the deep learning network using the TOF and SOS data. Section 1.2 on pgs. 4-5 discloses the use of an encoder and a decoder),
wherein training the neural network comprises inputting a TOF image pair or a signal strength image pair included in the training data to the encoder (pgs. 3-4, section 1.1 discloses that training the neural network comprises inputting the TOF images), and training the neural network to minimize a loss between a ground truth and a result that the decoder reconstructs a feature extracted by the encoder (section 1.2 on pgs. 4-5 discloses the use of a decoder; pgs. 5-6, section 2 disclose that the SOS images represent ground truth; and fig. 5 shows the neural network is trained to minimize loss between the ground truth and the result of the decoder),
wherein the TOF image pair comprises images representing a TOF change of ultrasound data having penetrated a virtual tissue modeled with a speed-of-sound distribution, in different directions (pgs. 3-4, section 1.1 discloses simulating a time of flight (TOF) image using the speed of sound (SOS) parameters).
Cheng does not specifically disclose wherein the signal strength image pair comprises images representing a signal strength change of ultrasound data having penetrated a virtual tissue modeled with an attenuation coefficient distribution, in different directions.
However, Shi, in a similar field of endeavor, teaches training a neural network using a signal strength image pair ([0035], [0040] and fig. 1 disclose a process of training a neural network using a pair of images 12a-b in order to generate attenuation map images; the change in color of the images 12a-b represents the change in signal strength; and [0056] further discloses training the model with the attenuation image), wherein the signal strength image pair comprises images representing a signal strength change of ultrasound data having penetrated a virtual tissue modeled with an attenuation coefficient distribution, in different directions ([0040] discloses the attenuation map 14 is generated from the SPECT emission images (signal strength images), meaning the signal strength images represent the signal strength change of data modeled with the attenuation coefficient distribution).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of Shi, in which the signal strength image pair comprises images representing a signal strength change of ultrasound data having penetrated a virtual tissue modeled with an attenuation coefficient distribution, in different directions, to the method of Cheng to achieve the predictable result of producing a more accurate neural network.
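For illustration of the signal-strength counterpart addressed in this rejection, a minimal sketch follows, assuming simple exponential (Beer-Lambert-style) attenuation; the map, coefficient values, and grid spacing are hypothetical:

import numpy as np

atten = np.zeros((128, 128))   # attenuation coefficient map (Np/m)
atten[40:90, 30:100] = 8.0     # hypothetical attenuating inclusion
dx = 1e-3                      # grid spacing (m)

path_integral = atten.sum(axis=1) * dx  # line integral along each horizontal ray
amplitude = np.exp(-path_integral)      # relative received amplitude per ray
strength_change = amplitude - 1.0       # change vs. an attenuation-free medium
# Repeating the simulation in a second direction would yield a signal
# strength image pair, with the attenuation map serving as the ground truth.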
Regarding claim 6, Cheng in view of Shi teaches the method of claim 5, as set forth above. Cheng further teaches that the training data further comprises speed-of-sound distribution images or attenuation coefficient distribution images of the virtual tissues (pgs. 3-4, section 1.1 discloses the training data includes SOS distribution images), wherein each of the speed-of-sound distribution images is a ground truth of the TOF images created with a corresponding virtual tissue (pgs. 5-6, section 2 disclose the SOS images are ground truth images), and wherein each of the attenuation coefficient distribution images is a ground truth of the signal strength image pair created with the corresponding virtual tissue.
Regarding claim 11, Cheng teaches the method of claim 9, as set forth above. Cheng does not specifically disclose if the input images are images representing the signal strength change, the ground truth is an image representing a modeled attenuation coefficient distribution of each virtual tissue.
However, Shi, in a similar field of endeavor, teaches that if the input images are images representing the signal strength change, the ground truth is an image representing a modeled attenuation coefficient distribution of each virtual tissue ([0035], [0040] and fig. 1 disclose a process of training a neural network using a pair of images 12a-b in order to generate attenuation map images; the change in color of the images 12a-b represents the change in signal strength; [0056] further discloses training the model with the attenuation image; and [0009] discloses the ground truth data is an attenuation map).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of Shi, in which if the input images are images representing the signal strength change, the ground truth is an image representing a modeled attenuation coefficient distribution of each virtual tissue, to the method of Cheng to achieve the predictable result of producing a more accurate neural network.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng and Shi as applied to claim 5 above, and further in view of Wildeboer et al. (US 20220361848, hereinafter Wildeboer).
Regarding claim 7, Cheng and Shi teach the method of claim 5, as set forth above. Cheng and Shi do not specifically teach that the decoder comprises a network structure that provides a feature reconstructed at a low resolution and then transformed with a high resolution, through a skip connection.
However, Wildeboer, in a similar field of endeavor, teaches a decoder comprising a network structure that provides a feature reconstructed at a low resolution and then transformed with a high resolution, through a skip connection ([0032] discloses the network structure includes a skip connection that enables the decoder to generate higher-resolution images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the decoder disclosed by Cheng to comprise a network structure that provides a feature reconstructed at a low resolution and then transformed with a high resolution, through a skip connection in order to improve the quality of the trained neural network, as recognized by Wildeboer ([0008]).
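For illustration of a decoder skip connection that carries a feature reconstructed at low resolution up to a higher resolution, a minimal U-Net-style sketch follows (hypothetical channel counts and sizes; this is not Wildeboer’s exact network):

import torch
import torch.nn as nn

class SkipDecoderBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Transposed convolution doubles the spatial resolution.
        self.up = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        # Fuse the upsampled feature with the matching encoder feature.
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, low_res_feat, skip_feat):
        upsampled = self.up(low_res_feat)              # low res to high res
        merged = torch.cat([upsampled, skip_feat], 1)  # the skip connection
        return self.fuse(merged)

block = SkipDecoderBlock(16)
out = block(torch.randn(1, 16, 32, 32), torch.randn(1, 16, 64, 64))
assert out.shape == (1, 16, 64, 64)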
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW BEGEMAN whose telephone number is (571)272-4744. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW W BEGEMAN/Examiner, Art Unit 3798