DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to Applicant’s remarks filed on 12/18/2025. The amendments to claims 1-3, 5-6, and 12 have been entered. Claims 4 and 10-11 are cancelled by Applicant and are therefore withdrawn from further consideration pursuant to 37 CFR 1.142(b); the corresponding rejections of claims 4 and 10-11 from the prior Office action are withdrawn as moot in light of Applicant’s cancellation. New claims 14-15 have been entered. Claims 1-3, 5-9, and 12-15 remain pending, of which claims 8-9 and 13 stand withdrawn by Applicant. Accordingly, claims 1-3, 5-7, 12, and 14-15 are examined in this Office action.
Response to Arguments
Applicant’s arguments, see pp. 7-9, with respect to claims 1-3, 5-7, and 12 have been fully considered.
After review of Applicant’s remarks regarding the objections to claims 1, 4, and 12, Examiner respectfully agrees with Applicant, and the objections are withdrawn.
After review of the amendments to claims 1-8 and 10-12, Examiner respectfully disagrees with Applicant’s remarks. The prior rejection under 35 U.S.C. § 112(b) has been withdrawn, and new rejections under 35 U.S.C. § 112(b) are set forth below.
Regarding the rejections under 35 U.S.C. § 102, Examiner respectfully disagrees with the remarks and does not find Applicant’s arguments persuasive. New grounds of rejection are made in view of the following: the new amendments provided by Applicant and the attached remarks; an updated search and review of pertinent, eligible prior art; the newly added claims; and/or a different interpretation of the previously applied references. Applicant’s arguments with respect to claims 1-3, 5-7, and 12 have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Examiner respectfully notes that Applicant’s arguments address only independent claims 1 and 12; no remarks regarding the subject matter of the dependent claims have been presented. Accordingly, the rejections of dependent claims 2-3 and 5-7 are modified to address Applicant’s amendments and the new rejections of independent claims 1 and 12, and are sustained. New claims 14-15 are rejected as set forth below. The rejections of claims 1-3, 5-7, 12, and 14-15 under 35 U.S.C. § 102 and 35 U.S.C. § 103 are maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5-7, 12, and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 2-3, 5-7, and 14-15 are also rejected at least by virtue of their dependency upon a rejected base claim.
Claims 1 and 12 are indefinite. Claim 1 recites “generate, using the ultrasonic image, an input image that matches a predetermined size of a training image that was used to train a model that is configured to receive the input image and determine an imaging site of the subject; […] determine the imaging site of the subject based on inputting the input image to the model”; claim 12 recites substantially identical limitations. These limitations are unclear. The phrase “was used to train a model that is configured […]” combines past-tense and present-tense terms when describing the model, and it is not clear whether the model is part of the ‘ultrasonic diagnostic device’ or exists as a remote algorithm external to the ‘ultrasonic diagnostic device’. Similarly, the limitation “determine the imaging site of the subject based on inputting the input image to the model” in claim 1 – and the mirrored language in claim 12 – is unclear because it is uncertain whether the processor determines the ‘imaging site’ or whether the ‘model’ determines the ‘imaging site’ and the determination is received by the processor. It is suggested to amend the claims to provide a separate step or function describing the training of the model, to define the relationship between the model and the structural elements of the ‘ultrasonic diagnostic device’, and to delineate the preprocessing function of generating an ‘input image’ from the operation of the ‘trained model’.
Claim 5 recites the limitation “wherein the predetermined processing includes zero-fill processing”. There is insufficient antecedent basis for this limitation in the claim. It is not certain what the ‘predetermined processing’ refers to: under one interpretation it may refer to the ‘preprocessing’ of claim 2, and under another it may refer to a new, distinct processing step. It is suggested to amend the claims to use consistent language.
Claim 6 recites the limitation “perform predetermined processing on the ultrasonic image to generate the input image that includes the predetermined length of the predetermined size of the training image”, which is unclear. It is uncertain how the ‘input image’ includes the “predetermined length of the predetermined size of the training image”: under one interpretation this may mean generating an ‘input image’ with a ‘predetermined length’ in the ‘depth direction’; under another, the ‘predetermined length’ is a distance perpendicular to the ‘depth direction’; and under yet another, it is a numerical or graphical value presented or encoded within the ‘input image’. It is suggested to amend the claim to clarify how the ‘predetermined length’ relates to the ‘input image’. For purposes of examination, the broadest reasonable interpretation of the claim language – including the interpretations discussed above – is applied to the limitations.
Regarding claims 14 and 15: claim 14 recites the limitations “wherein the ultrasonic image is smaller than the training image, and wherein the input image is larger than the ultrasonic image”; similarly, claim 15 recites the limitations “wherein the ultrasonic image is larger than the training image, and wherein the input image is smaller than the ultrasonic image”. The terms “smaller” and “larger” are relative terms which render the claims indefinite. The terms “smaller” and “larger” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear whether the terms ‘smaller’ and ‘larger’ refer to the number of pixels in an image, the amount of data in an image, the length of an image, the depth of an image, etc. Furthermore, it is unclear how the images may differ in ‘size’ given the recitation “an input image that matches a predetermined size of a training image” in claim 1, from which claims 14 and 15 depend. It is suggested to amend the claims to clarify what the terms “smaller” and “larger” particularly refer to. For purposes of examination, the broadest reasonable interpretation of the terms “smaller” and “larger” is applied to the limitations.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 5, 12, and 14-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Honarvar et al. (US20230054588A1, 2023-02-23; hereinafter “Honarvar”).
Regarding claim 1, Honarvar teaches an ultrasonic diagnostic device (“ultrasound systems and methods. Some embodiments provide systems and methods useful for detecting specific tissue and/or shear waves within the tissue” [0002]; “System 10 comprises an ultrasound unit 12 […] An ultrasound transducer 14 is coupled to ultrasound unit 12.” [0069]; [0068-0095], [fig. 1]), comprising:
an ultrasonic probe (“An ultrasound transducer 14 is coupled to ultrasound unit 12. Transducer 14 is positioned adjacent to the skin of a patient P1 proximate to a region or volume of tissue that is to be imaged to acoustically couple transducer 14 to patient P1.” [0069]; “Images of a three-dimensional (3D) volume of tissue may be obtained, for example, by using a two-dimensional (2D) matrix transducer array […] A transducer 14 may be moved relative to patient P1 to acquire a larger volume of ultrasound data.” [0071]; [0068-0095], [fig. 1]); and
a processor configured to communicate with the ultrasonic probe (“System 10 comprises an ultrasound unit 12 having a controller 13. An ultrasound transducer 14 is coupled to ultrasound unit 12.” [0069]; “Controller 13 controls transducer 14 and exciter 15 to acquire ultrasound imaging data according to a desired imaging plan. Controller 13 may also receive and process received echo signals from transducer 14.” [0075]; “Embodiments of the invention may be implemented using specifically designed hardware, configurable hardware, programmable data processors […] Examples of programmable data processors are: microprocessors, digital signal processors (“DSPs”), embedded processors, graphics processors, math co-processors, general purpose computers,” [0176]; [0068-0095], [fig. 1]);
wherein the processor is configured to:
receive ultrasound data of a subject acquired by the ultrasonic probe (“Ultrasound unit 12 is operative to obtain ultrasound images of the region or volume of tissue of interest […] Ultrasound unit 12 may be operative to obtain three-dimensional (3D) images of the region or volume of tissue of interest.” [0070]; “Controller 13 controls transducer 14 and exciter 15 to acquire ultrasound imaging data according to a desired imaging plan. Controller 13 may also receive and process received echo signals from transducer 14.” [0075]; [0068-0095], [fig. 1]);
generate an ultrasonic image of the subject based on the ultrasound data (“Ultrasound unit 12 is operative to obtain ultrasound images of the region or volume of tissue of interest […] Ultrasound unit 12 may be operative to obtain three-dimensional (3D) images of the region or volume of tissue of interest.” [0070]; “Controller 13 controls transducer 14 and exciter 15 to acquire ultrasound imaging data according to a desired imaging plan. Controller 13 may also receive and process received echo signals from transducer 14.” [0075]; [0068-0095], [fig. 1]);
generate, using the ultrasonic image, an input image that matches a predetermined size of a training image that was used to train a model that is configured to receive the input image and determine an imaging site of the subject (“System 10 may, for example, comprise a machine learning model 20 which has been trained to autonomously detect the desired type of tissue(s) (e.g. liver tissue).” [0090]; “As described elsewhere herein model 20 may receive two-dimensional B-mode ultrasound image data as input. […] In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; “The example image size used to train model 20 comprising example architecture 40 was 256 by 256 pixels.” [0105]; “The image input (e.g. for branch 50B) may be an image (e.g. a corresponding b-mode image) with an example size of 256×256 and a pixel size of 0.55 mm. All inputs to model 22 may be normalized e.g. to a range of [−0.5 , 0.5].” [0138]; The input image data (e.g., B-mode ultrasound image data) may be resized/down-sampled to match the example image size used to train a model, wherein the model detects a desired type of tissue within the input image [0068-0095], [fig. 1]);
input the input image to the model (“Model 20 may, for example, be trained to perform image segmentation of incoming images of tissue (e.g. incoming ultrasound B-mode images acquired from ultrasound signals acquired by transducer 14). The segmentation may, for example, identify a desired tissue type such as liver tissue.” [0092]; “model 20 may receive two-dimensional B-mode ultrasound image data as input. […] In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; [0068-0095], [fig. 1]);
determine the imaging site of the subject based on inputting the input image to the model (“model 20 may receive two-dimensional B-mode ultrasound image data as input. […] Model 20 may output a mask representing pixels which have been identified as corresponding to the desired tissue type.” [0095]; [0068-0095], [fig. 1]); and
selectively change imaging conditions of the ultrasonic diagnostic device based on the imaging site of the subject (“In some embodiments transducer 14 is positioned and/or moved by a mechanical system. In some embodiments a robotic system is controlled to position transducer 14.” [0072]; “It is this area of overlap (that contains both the desired tissue (e.g. liver tissue) and good shear waves) that is ideally maximized during an image acquisition sequence. The area of overlap may, for example, be maximized by adjusting a position and/or orientation of transducer 14, adjusting a placement of exciter 15, adjusting one or more parameters of a control signal of exciter 15, adjusting one or more ultrasound imaging parameters, etc.” [0157]; “In some cases a region of interest (ROI) is automatically selected and/or varied as described elsewhere herein.” [0165]; The system may automatically select/vary transducer and ultrasound imaging parameters based on model output to maximize area of overlap during an image acquisition sequence [0068-0095], [fig. 1]).
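For clarity of the record, the following minimal sketch illustrates the examiner's understanding of generating an input image that matches a predetermined training image size by resizing/down-sampling, consistent with Honarvar [0095] and [0105]. The code is hypothetical NumPy pseudocode supplied for illustration only; the function name, image sizes, and nearest-neighbor method are the examiner's assumptions and are not disclosures of Honarvar or of the instant application.

```python
import numpy as np

def resize_to_training_size(image: np.ndarray, size: int = 256) -> np.ndarray:
    # Nearest-neighbor resampling: map each output pixel index to the
    # closest source pixel index so the output matches the size of the
    # training images (e.g., 256x256 per Honarvar [0105]).
    rows, cols = image.shape
    r_idx = np.arange(size) * rows // size
    c_idx = np.arange(size) * cols // size
    return image[np.ix_(r_idx, c_idx)]

# Example: a 512x640 B-mode frame down-sampled to a 256x256 model input.
frame = np.zeros((512, 640))
model_input = resize_to_training_size(frame)
assert model_input.shape == (256, 256)
```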
Regarding claim 2, Honarvar teaches the ultrasonic diagnostic device according to claim 1,
Honarvar further teaching wherein generating the input image based on the ultrasonic image includes performing preprocessing on the ultrasonic image (“As described elsewhere herein model 20 may receive two-dimensional B-mode ultrasound image data as input. […] In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; “The example image size used to train model 20 comprising example architecture 40 was 256 by 256 pixels.” [0105]; “The image input (e.g. for branch 50B) may be an image (e.g. a corresponding b-mode image) with an example size of 256×256 and a pixel size of 0.55 mm. All inputs to model 22 may be normalized e.g. to a range of [−0.5 , 0.5].” [0138]; The model receives B-mode image as input and down-sampling (i.e., preprocessing) of the input image may be performed [0068-0095], [fig. 1], [see claim 1 rejection]).
Regarding claim 5, Honarvar teaches the ultrasonic diagnostic device according to claim 2,
Honarvar further teaching wherein the predetermined processing includes zero-fill processing (“In the last stage of the branch 50A a 3D convolutional layer with no zero padding in the third dimension may be used to reduce the third dimension so it can be combined with the last stage maps of branch 50B.” [0137]; [fig. 1-4, 8]).
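For clarity of the record, the following minimal sketch illustrates zero-fill (zero-padding) processing under the broadest reasonable interpretation applied above. This is hypothetical NumPy code supplied for illustration only; the function name and target size are the examiner's assumptions.

```python
import numpy as np

def zero_fill(image: np.ndarray, target_rows: int, target_cols: int) -> np.ndarray:
    # Append zeros at the bottom/right so the image reaches the target
    # size; no image data is discarded.
    pad_r = max(target_rows - image.shape[0], 0)
    pad_c = max(target_cols - image.shape[1], 0)
    return np.pad(image, ((0, pad_r), (0, pad_c)),
                  mode="constant", constant_values=0.0)

# Example: a 200x256 frame zero-filled to 256x256.
padded = zero_fill(np.ones((200, 256)), 256, 256)
assert padded.shape == (256, 256)
```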
Regarding claim 12, Honarvar teaches a non-transitory computer readable storage medium for storing commands that when executed by a processor (“Embodiments of the invention may be implemented using […] programmable data processors configured by the provision of software (which may optionally comprise “firmware”) capable of executing on the data processors, special purpose computers or data processors that are specifically programmed, configured, or constructed to perform one or more steps in a method” [0176]; [0068-0095, 0176-0185], [fig. 1-4]) cause the processor to:
receive ultrasound data of a subject acquired by an ultrasonic probe (“acquiring a plurality of ultrasound images of the tissue region with an ultrasonic transducer acoustically coupled to the patient;” [clm 1]; “Images of a three-dimensional (3D) volume of tissue may be obtained, for example, by using a two-dimensional (2D) matrix transducer array […] A transducer 14 may be moved relative to patient P1 to acquire a larger volume of ultrasound data.” [0071]; [0068-0095], [fig. 1-2]);
generate an ultrasonic image of the subject based on the ultrasound data (“Ultrasound unit 12 is operative to obtain ultrasound images of the region or volume of tissue of interest […] Ultrasound unit 12 may be operative to obtain three-dimensional (3D) images of the region or volume of tissue of interest.” [0070]; “Controller 13 controls transducer 14 and exciter 15 to acquire ultrasound imaging data according to a desired imaging plan. Controller 13 may also receive and process received echo signals from transducer 14.” [0075]; [0068-0095], [fig. 1-2]);
generate, using the ultrasonic image, an input image that matches a predetermined size of a training image that was used to train a model that is configured to receive the input image and determine an imaging site of the subject (“System 10 may, for example, comprise a machine learning model 20 which has been trained to autonomously detect the desired type of tissue(s) (e.g. liver tissue).” [0090]; “As described elsewhere herein model 20 may receive two-dimensional B-mode ultrasound image data as input. […] In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; “The example image size used to train model 20 comprising example architecture 40 was 256 by 256 pixels.” [0105]; “The image input (e.g. for branch 50B) may be an image (e.g. a corresponding b-mode image) with an example size of 256×256 and a pixel size of 0.55 mm. All inputs to model 22 may be normalized e.g. to a range of [−0.5 , 0.5].” [0138]; [0068-0095], [fig. 1-4], [see claim 1 rejection]);
input the input image to the model (“Model 20 may, for example, be trained to perform image segmentation of incoming images of tissue (e.g. incoming ultrasound B-mode images acquired from ultrasound signals acquired by transducer 14). The segmentation may, for example, identify a desired tissue type such as liver tissue.” [0092]; “model 20 may receive two-dimensional B-mode ultrasound image data as input. […] In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; [0068-0095], [fig. 1-4]);
determine the imaging site of the subject based on inputting the input image to the model (“model 20 may receive two-dimensional B-mode ultrasound image data as input. […] Model 20 may output a mask representing pixels which have been identified as corresponding to the desired tissue type.” [0095]; [0068-0095], [fig. 1-4, 8]); and
selectively change imaging conditions of the ultrasonic diagnostic device based on the imaging site of the subject (“In some embodiments transducer 14 is positioned and/or moved by a mechanical system. In some embodiments a robotic system is controlled to position transducer 14.” [0072]; “It is this area of overlap (that contains both the desired tissue (e.g. liver tissue) and good shear waves) that is ideally maximized during an image acquisition sequence. The area of overlap may, for example, be maximized by adjusting a position and/or orientation of transducer 14, adjusting a placement of exciter 15, adjusting one or more parameters of a control signal of exciter 15, adjusting one or more ultrasound imaging parameters, etc.” [0157]; “In some cases a region of interest (ROI) is automatically selected and/or varied as described elsewhere herein.” [0165]; [0068-0095], [fig. 1-4, 8], [see claim 1 rejection]).
Regarding claim 14, Honarvar teaches the ultrasonic diagnostic device according to claim 1,
Honarvar further teaching wherein the ultrasonic image is smaller than the training image, and wherein the input image is larger than the ultrasonic image (“In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; “The example image size used to train model 20 comprising example architecture 40 was 256 by 256 pixels.” [0105]; “phasor input (e.g. for branch 50A) is created from a phasor corresponding to the shear wave with an example size of 128×128×3 and example pixel size of 1.1 mm. The image input (e.g. for branch 50B) may be an image (e.g. a corresponding b-mode image) with an example size of 256×256 and a pixel size of 0.55 mm. All inputs to model 22 may be normalized e.g. to a range of [−0.5 , 0.5].” [0138]; The example training image size (e.g., 256×256) may be larger than the phasor image input (128×128), and the input image data for the model may be resized to 512×512 (or 256×256) during down-sampling [0068-0095], [fig. 1-4, 8], [see claim 1 rejection]).
Regarding claim 15, Honarvar teaches the ultrasonic diagnostic device according to claim 1,
Honarvar further teaching wherein the ultrasonic image is larger than the training image, and wherein the input image is smaller than the ultrasonic image (“In some embodiments the input image data is resized. In some embodiments the input image data is down-sampled. In some embodiments the input image data is down sampled to 512×512 pixels, 256×256 pixels, 128×128 pixels, 64×64 pixels, etc.” [0095]; “The example image size used to train model 20 comprising example architecture 40 was 256 by 256 pixels.” [0105]; “phasor input (e.g. for branch 50A) is created from a phasor corresponding to the shear wave with an example size of 128×128×3 and example pixel size of 1.1 mm. The image input (e.g. for branch 50B) may be an image (e.g. a corresponding b-mode image) with an example size of 256×256 and a pixel size of 0.55 mm. All inputs to model 22 may be normalized e.g. to a range of [−0.5 , 0.5].” [0138]; The input image data may be down-sampled to a smaller size than the B-mode image input to the model [0068-0095], [fig. 1-4, 8], [see claim 14 rejection]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Honarvar as applied to claim 2 above, in view of Ito et al. (US20200043172A1, 2020-02-06; hereinafter “Ito”).
Regarding claim 3, Honarvar teaches the ultrasonic diagnostic device according to claim 2,
Honarvar further teaching wherein the processor is further configured to extract a portion of the ultrasound image during the preprocessing, and generate the input image based on extracting the portion of the ultrasound image (“System 10 may, for example, detect boundaries between different portions of an acquired image (e.g. between portions of the image representing a desired tissue type such as the liver and portions of the image which do not represent the desired tissue type),” [0088]; [0068-0095], [fig. 1-4, 8], [see claim 1 rejection]);
but Honarvar may fail to teach the processor further configured to perform the extraction based on a length of the ultrasonic image in a depth direction of the subject being greater than a predetermined length of the predetermined size of the training image.
However, in the same field of endeavor, Ito teaches an ultrasonic diagnostic device (“An ultrasonic diagnostic apparatus” [clm 1]; [0031-0055], [fig. 1-2]);
Ito further teaching wherein the processor (“The image processing module 26 can be constituted by one or a plurality of processors operated according to a program. […] The information processing apparatus functions as an ultrasonic image processing apparatus.” [0046]; “A control unit 34 controls an operation of each component illustrated in FIG. 1. In the embodiment, the control unit 34 is configured by a CPU and a program. The control unit 34 may function as the image processing module 26.” [0051]; [0031-0055], [fig. 1-2]) is further configured to,
based on a length of the ultrasonic image in a depth direction of the subject being greater than a predetermined length of the predetermined size of the training image (“a detecting unit that detects a plurality of boundary points by setting a plurality of search paths so as to traverse a plurality of positions in a boundary image on an ultrasonic image including the boundary image” [clm 1]; “In the generating step, a region of interest including an attention tissue image present on a shallow side of a boundary image is generated on the basis of the boundary image, the boundary image having a form which extends in a direction intersecting with a depth direction on an ultrasonic image.” [0039]; “A start point of the boundary search is the deepest point on each search path in the embodiment, and the boundary search is sequentially executed from the start point toward a shallow side.” [0053]; The image analyzing unit analyzes a region of the tomographic image having x and y (i.e., depth) coordinates and extracts a partial image for analysis [0031-0055], [fig. 1-9, 12-15; see fig. 6 reproduced below]),
extract a portion of the ultrasound image during the preprocessing, and generate the input image based on extracting the portion of the ultrasound image (“That is, the image analyzing unit 24 fulfills a CAD function. The image analyzing unit 24 performs the image analysis in units of frames. The image analysis may also be executed in units of a predetermined number of frames. The image analyzing unit 24 can be constituted by a machine learning analyzer such as a convolutional neural network (CNN). The image analyzing unit 24 has a function of recognizing, extracting, or discriminating a low-luminance tumor, a low-luminance non-tumor, or the like.” [0048]; “The ROI setting unit 22 includes a preprocessing unit 36, a boundary detecting unit 38, and an ROI generating unit 42 in the illustrated example. The preprocessing unit 36 performs necessary preprocessing on the tomographic image. Examples of the preprocessing include smoothing processing, minimum value extraction processing, maximum value extraction processing, median (median extraction) processing, edge enhancement processing, and the like.” [0052]; “The image analyzing unit 24 cuts a partial image from the tomographic image or the like based on the generated ROI, and executes an analysis on the partial image. Alternatively, the image analyzing unit 24 defines an analysis target range in the tomographic image or the like based on the generated ROI, and executes an analysis within the analysis target range.” [0055]; The image analyzing unit may assess image portions derived after performing maximum value extraction processing on the tomographic image [0031-0055], [fig. 1-9, 12-15; see fig. 6 reproduced below]).
[Ito, FIG. 6 (reproduced): the original image is preprocessed into an ROI and a mask image before the machine learning analyzer performs image analysis to discriminate a tumor.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the ultrasonic diagnostic device taught by Honarvar with the processor configured to determine a length of the ultrasonic image in a depth direction as taught by Ito. There is a need for improved systems and methods for ultrasound imaging, and for improved systems and methods for assessing the health of tissues such as liver tissues (Honarvar [0008]). In a case where the region of interest is displayed, its appearance can be improved by smoothing the boundary point sequence; as a result, the lower side of the region of interest may be smoothed (Ito [0034]). According to this configuration, if the form of the lower side of the region of interest is severely changed in a time axis direction, the change can be suppressed and the accuracy of setting the region of interest can be improved (Ito [0035]).
Regarding claim 6, Honarvar teaches the ultrasonic diagnostic device according to claim 2,
Honarvar may fail to teach the processor further configured to, based on a length of the ultrasonic image in a depth direction of the subject being less than a predetermined length of the predetermined size of the training image, perform predetermined processing on the ultrasonic image to generate the input image.
However, in the same field of endeavor, Ito teaches wherein the processor is further configured to, based on a length of the ultrasonic image in a depth direction of the subject being less than a predetermined length of the predetermined size of the training image, perform predetermined processing on the ultrasonic image to generate the input image that includes the predetermined length of the predetermined size of the training image (“That is, the image analyzing unit 24 fulfills a CAD function. The image analyzing unit 24 performs the image analysis in units of frames. The image analysis may also be executed in units of a predetermined number of frames. The image analyzing unit 24 can be constituted by a machine learning analyzer such as a convolutional neural network (CNN). The image analyzing unit 24 has a function of recognizing, extracting, or discriminating a low-luminance tumor, a low-luminance non-tumor, or the like.” [0048]; “The ROI setting unit 22 includes a preprocessing unit 36, a boundary detecting unit 38, and an ROI generating unit 42 in the illustrated example. The preprocessing unit 36 performs necessary preprocessing on the tomographic image. Examples of the preprocessing include smoothing processing, minimum value extraction processing, maximum value extraction processing, median (median extraction) processing, edge enhancement processing, and the like.” [0052]; The image analyzing unit may assess image portions derived after performing minimum value extraction processing on the tomographic image [0031-0055], [fig. 1-9, 12-15]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the ultrasonic diagnostic device taught by Honarvar with the processor configured to determine a length of the ultrasonic image in a depth direction as taught by Ito. There is a need for improved systems and methods for ultrasound imaging, and for improved systems and methods for assessing the health of tissues such as liver tissues (Honarvar [0008]). In a case where the region of interest is displayed, its appearance can be improved by smoothing the boundary point sequence; as a result, the lower side of the region of interest may be smoothed (Ito [0034]). According to this configuration, if the form of the lower side of the region of interest is severely changed in a time axis direction, the change can be suppressed and the accuracy of setting the region of interest can be improved (Ito [0035]).
Regarding claim 7, Honarvar and Ito teach the ultrasonic diagnostic device according to claim 6,
Honarvar further teaching wherein the predetermined processing includes zero-fill processing (“In the last stage of the branch 50A a 3D convolutional layer with no zero padding in the third dimension may be used to reduce the third dimension so it can be combined with the last stage maps of branch 50B.” [0137]; [fig. 1-4, 8]).
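For clarity of the record, the following minimal sketch illustrates the depth-conditioned preprocessing under the interpretations of claims 3 and 6-7 applied above: extracting a portion when the depth-direction length exceeds the predetermined length, and zero-filling when it falls short. This is hypothetical NumPy code supplied for illustration only and is not a disclosure of Honarvar or Ito.

```python
import numpy as np

def make_model_input(us_image: np.ndarray, depth_px: int) -> np.ndarray:
    # The depth direction is assumed to run along axis 0 (rows).
    if us_image.shape[0] > depth_px:
        # Length in the depth direction exceeds the predetermined
        # length: extract a portion of the image (cf. claim 3).
        return us_image[:depth_px, :]
    # Length is less than the predetermined length: zero-fill the
    # remaining rows (cf. claims 6-7).
    pad = depth_px - us_image.shape[0]
    return np.pad(us_image, ((0, pad), (0, 0)),
                  mode="constant", constant_values=0.0)
```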
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sawkey (US20220414867A1, 2022-12-29) teaches using artificial intelligence modeling to classify organ shapes for autosegmentation quality assurance [0001].
Tison et al. (WO2022261641A1, 2022-12-15) teaches methods and system for automatic coronary angiography interpretation using machine learning techniques [0003].
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III whose telephone number is (571) 272-7296. The examiner can normally be reached M-F, 8AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES FRANKLIN MCDONALD III
Examiner
Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797