DETAILED ACTION
This Office action is responsive to communications filed on 11/13/2025. Claims 1, 3, 5, 9-14, & 20 have been amended. Claims 2 & 4 are canceled. Presently, Claims 1, 3, & 5-20 remain pending and are hereinafter examined on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Previous objection to the Abstract is withdrawn in view of the amendments filed on 11/13/2025.
Previous objections to the Drawings are withdrawn in view of the amendments filed on 11/13/2025.
Previous rejection under 35 USC § 101 is withdrawn in view of the amendments filed on 11/13/2025.
Previous rejections under 35 USC § 112(a) are withdrawn in view of the amendments filed on 11/13/2025.
Previous rejections under 35 USC § 112(b) are withdrawn in view of the amendments filed on 11/13/2025 except for Claim 6 and Claims 9-12.
For Claim 6: lines 5-6, it is unclear what "once determining the noncontact state, determine whether the non-depicted portion is present" means in the context of the claim. The claimed phrase appears grammatically incomplete.
For Claims 9-12: line 2, it is unclear what "once the depicted contour is closed" means in the context of the claim. The phrase is interpreted as meaning that the outer boundary or contour of the organ has been detected and forms a complete, continuous shape. It is requested that the claim define what constitutes a closed depicted contour.
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on Fujisawa (US 2021/0093303 A1) alone, as applied under 35 USC § 102(a)(1) in the prior rejection of record, for any teaching or matter specifically challenged in the argument. The new ground of rejection now relies on Fujisawa (US 2021/0093303 A1) in view of Orlando et al (Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys. 2020 Jun) under 35 USC § 103.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 & 9-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.
Claim 6:
Lines 5-6: it is unclear what "once determining the noncontact state, determine whether the non-depicted portion is present" means in the context of the claim. The claimed phrase appears grammatically incomplete. In addition, if the probe is not in contact, no ultrasound image can be acquired, and thus no image-based analysis (such as determining whether a "non-depicted portion" is present) can be performed. Accordingly, the claim is met if a system detects a condition in which the ultrasound probe is not physically in contact with the subject's body surface. Appropriate correction is required.
Claims 9-12:
Line 2: it is unclear what "once the depicted contour is closed" means in the context of the claim. The phrase is interpreted as meaning that the outer boundary or contour of the organ has been detected and forms a complete, continuous shape. Appropriate correction is required.
The dependent claims of the above rejected claims are rejected due to their dependency.
Claim Objections
The following claims are objected to because of the following informalities and should recite:
Claim 14: line 4, “[[a]]the contour”.
Claim 15: lines 2-4, "[[a]]the three-dimensional image of the contour and [[a]]the three-dimensional image of the region, as the image representing the contour and the image representing the region".
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 5-15, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa (US 2021/0093303 A1) in view of Orlando et al (Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys. 2020 Jun).
Claim 1: Fujisawa discloses, An ultrasound diagnostic apparatus comprising: (¶Abstract, FIG. 3)
a monitor; (display 20, FIG. 3)
an ultrasound probe; (ultrasonic probe 30, FIG. 3)
a position and posture sensor that acquires position and posture information of the ultrasound probe; and (position sensor 40, FIG. 3, ¶0053)
a processor configured to: (¶0079, ‘image processor 30’)
acquire a plurality of ultrasound images of a subject by transmitting and receiving ultrasound beams by using the ultrasound probe; (¶Abstract, ¶0026-0027, ¶0077)
generate three-dimensional image data of a contour of an organ included in each of the plurality of ultrasound images based on the position and posture information acquired by the position and posture sensor; generate three-dimensional image data of a region of the organ included in each of the plurality of ultrasound images based on the position and posture information acquired by the position and posture sensor; and
-The ultrasonic image data, in the form of multiple cross-sections, correspond to position data of the ultrasonic probe, ¶0061-0063, ¶0103-0104. This position data includes the coordinates (X, Y, Z) and tilt angle (posture) of the probe, ¶0054. The acquired ultrasonic images are arranged three-dimensionally in a 3D memory based on the position data, ¶0078-0080, ¶0103-0104. From this collection of three-dimensionally arranged ultrasonic image data and position data, the deriving function 172 determines the shape of the entire target organ and the imaged organ region, ¶0076-0077, ¶0082, ¶0107. The determining function 172 specifically extracts an organ contour from the ultrasonic images of the cross-sections arranged three-dimensionally, ¶0108. This extracted organ contour is then arranged in a three-dimensional manner, ¶0108, and thus the region of the organ is arranged in a three-dimensional manner. The resulting volume data represents the imaged organ region in three dimensions, ¶0076-0077, ¶0082, ¶0107.
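For illustration only, the following is a minimal sketch (not Fujisawa's implementation) of arranging 2D ultrasound frames in a 3D memory based on probe position (X, Y, Z) and tilt angle, in the spirit of ¶0078-0080 and ¶0103-0104 as characterized above; the voxel size, array dimensions, rotation convention, and function names are assumptions for illustration.

```python
# Illustrative sketch only: placing 2D ultrasound frames into a 3D volume
# using probe position (X, Y, Z) and tilt angle. The array shapes, voxel
# size, and rotation convention are assumptions for illustration.
import numpy as np

def rotation_about_y(angle_rad: float) -> np.ndarray:
    """Rotation matrix for a probe tilt about the y-axis (assumed convention)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def arrange_frame(volume: np.ndarray, frame: np.ndarray,
                  probe_xyz: np.ndarray, tilt_rad: float,
                  voxel_mm: float = 1.0) -> None:
    """Write one 2D frame into the 3D volume at the pose reported by the sensor."""
    h, w = frame.shape
    rot = rotation_about_y(tilt_rad)
    for r in range(h):
        for c in range(w):
            # Pixel position in the frame plane (z = 0 before rotation), in mm.
            local = np.array([c - w / 2.0, r - h / 2.0, 0.0])
            world = rot @ local + probe_xyz
            i, j, k = np.round(world / voxel_mm).astype(int)
            if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                    and 0 <= k < volume.shape[2]):
                volume[i, j, k] = frame[r, c]

# Usage: two synthetic frames acquired at different probe poses.
vol = np.zeros((128, 128, 128), dtype=np.float32)
frame = np.random.rand(64, 64).astype(np.float32)
arrange_frame(vol, frame, probe_xyz=np.array([64.0, 64.0, 40.0]), tilt_rad=0.0)
arrange_frame(vol, frame, probe_xyz=np.array([64.0, 64.0, 40.0]), tilt_rad=np.deg2rad(10))
```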
Fujisawa fails to disclose:
display, on the monitor, an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region,
wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other.
However, Orlando in the context of segmentation techniques using deep learning on ultrasound images discloses, display, on the monitor, an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region,
-Regarding the region, Orlando teaches 3D TRUS images serving as the 3D region data acquired from the patient, [2.A Clinical dataset / pg. 2415], ‘Three-dimensional images of the prostate were acquired using end-fire (as used in prostate biopsy) and side-fire SR (as used in some HDR-BT) mechanical scanning approaches (Fig. 1).19 Both methods rotate a TRUS transducer around the long-axis to create geometrically different reconstructed 3D images that are influenced by the transducer array configuration. The images used in this study were acquired with the C9-5 transducer with the iU22 (Philips, Amsterdam, the Netherlands), the C9-5 and BPTRT9-5 transducers with the ATL HDI-5000 (Philips, Amsterdam, the Netherlands), and the 8848 transducer with the Profocus 2202 (BK Medical, Peabody, MA, USA) ultrasound machine models. The total dataset of 246 3D TRUS images consisted of 104 end-fire and 142 side-fire 3D TRUS images and was split into training, validation, and testing datasets as shown in Table I. Manual 3D prostate segmentations (excluding the seminal vesicles) were performed by an observer (IG) with approximately 15 yr of TRUS prostate image analysis experience.’
-Regarding the contour: the 3D image contour (i.e., boundary) is generated entirely independently of the raw region data. It is generated manually by an observer with 15 years of experience or computationally via deep learning, such as U-Net or 3D V-NET, see [2.A Clinical dataset / pg. 2415], [2.B.3. 3D Reconstruction / pg 2416], ‘2.B.3 3D Reconstruction Predicted 3D prostate segmentations were obtained by segmenting multiple 2D radial frames generated by rotation around a central axis, followed by reconstruction to a 3D surface following a reconstruction method similar to Qiu et al.11 Previous observations have noted that segmenting the prostate on slices near the apex and base of the prostate can be challenging due to boundary incompleteness,15 so we chose to radially slice the 3D prostate image as opposed to transverse slicing in an attempt to improve segmentations at all boundaries. This choice was motivated by the experience of segmenting the prostate when the center of the gland is in-plane, which typically presents as an easier image to accurately define and segment the boundaries on the left and right sides of the 2D image. In contrast to this, a transverse slicing approach would result in 2D images with the prostate appearing as a different size and shape, with this difference more pronounced at the prostate apex and base, and when comparing end-fire and side-fire image geometries.’, [2.C Evaluation and comparison / pg. 2417 left col], ‘The performance of our algorithm was compared against three state-of-the-art fully 3D predicting CNNs (V-Net,24 Dense V-Net,25 and High-resolution 3D-Net26) using an open-source implementation on the NiftyNet platform.27 It is often assumed that performing a prediction based on 3D information allows for an improved result due to increased spatial context, so we completed a direct comparison on the same test dataset to investigate this hypothesis. Similar to our proposed method, the same 165/41 3D TRUS images (Table I) were used for training/validation, respectively. The 3D V-Net was chosen to optimize hyperparameters, including loss function, due to its widespread use and performance in preliminary experiments. For simplicity, these hyperparameters were also used for the Dense V-Net and high-resolution 3D-Net. Parameters were chosen to maximize the spatial window size and usable memory on the GPU with optimized hyperparameters shown in Table II. Previous work has shown improved performance with a hybrid loss function,16 so we compared performance between a dice loss function and a dice plus cross-entropy (DiceXEnt) loss function, as provided in NiftyNet, using the 3D V-Net. Although NiftyNet offers a patch-based analysis, preliminary experiments resulted in 3D segmentations with many flat surfaces throughout the prediction corresponding to patch edges. Since we had one structure of interest (i.e., the prostate), we did not perform a patch-based analysis and predictions were performed on a resized image to match the spatial window.’
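For illustration only, a minimal sketch (not Orlando's implementation) of generating 2D radial frames by rotating a sampling plane about the central axis of a 3D image, in the spirit of the radial slicing described in Orlando Sec. 2.B.3 quoted above; the volume size, nearest-neighbor sampling, and number of angles are assumptions.

```python
# Illustrative sketch only: sampling 2D radial frames from a 3D volume by
# rotating a plane about the central z-axis. Nearest-neighbor sampling and
# the synthetic volume dimensions are assumptions for illustration.
import numpy as np

def radial_frame(volume: np.ndarray, angle_rad: float) -> np.ndarray:
    """Sample the plane through the volume's central z-axis at the given angle."""
    nx, ny, nz = volume.shape
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    radii = np.arange(-min(cx, cy), min(cx, cy))        # in-plane radial axis
    xs = np.clip(np.round(cx + radii * np.cos(angle_rad)).astype(int), 0, nx - 1)
    ys = np.clip(np.round(cy + radii * np.sin(angle_rad)).astype(int), 0, ny - 1)
    return volume[xs, ys, :]                             # shape: (len(radii), nz)

volume = np.random.rand(64, 64, 48).astype(np.float32)
frames = [radial_frame(volume, a) for a in np.linspace(0, np.pi, 30, endpoint=False)]
print(len(frames), frames[0].shape)  # 30 (62, 48)
```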
-Regarding the superimposed display in different manners: FIGS. 1 & 3 provide colored shapes representing the prostate contour superimposed on 3D bounding boxes containing grayscale planes that represent the acquired 3D ultrasound region. In FIGS. 4, 5, & 6, the display has multiple independently generated 3D surfaces superimposed over 3D slices of the 3D ultrasound region. These contours are displayed in different manners using distinct colors to distinguish how they were generated: red for manual, blue for rmU-Net, and yellow for V-Net.
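For illustration only, a minimal sketch (not Orlando's figures) of superimposing contours, drawn in distinct colors, over a grayscale slice of a 3D region, in the manner characterized above for FIGS. 4-6; the synthetic slice and circular contours are assumptions.

```python
# Illustrative sketch only: drawing two contours in distinct colors over a
# grayscale slice of a 3D region. The synthetic slice and circular contours
# are assumptions for illustration, not data from the reference.
import numpy as np
import matplotlib.pyplot as plt

slice_img = np.random.rand(256, 256)                 # stand-in for one region slice
theta = np.linspace(0.0, 2.0 * np.pi, 200)
cx, cy = 128.0, 128.0

fig, ax = plt.subplots()
ax.imshow(slice_img, cmap="gray")                    # the "region" image
ax.plot(cx + 60.0 * np.cos(theta), cy + 60.0 * np.sin(theta),
        color="red", lw=2, label="manual contour")
ax.plot(cx + 55.0 * np.cos(theta), cy + 55.0 * np.sin(theta),
        color="blue", lw=2, label="network contour")
ax.legend(loc="lower right")
ax.set_title("Contours superimposed on a region slice (illustrative)")
plt.show()
```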
wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other.
- The 3D image contour (i.e., boundary) is generated entirely independently of the 3D region data. The 3D region data is acquired through ultrasound scanning of a patient, while the 3D contour data is generated either by manual human annotation or computationally by the neural networks, see [2.A Clinical dataset / pg. 2415], [2.B.3. 3D Reconstruction / pg 2416], [2.C Evaluation and comparison / pg. 2417 left col].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitor of Fujisawa to display an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region, wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other, in view of the teachings of Orlando. The motivation to do so would yield predictable results such as providing accurate targeting, reducing patient risk, and improving clinical workflow, as suggested by Orlando, ¶Abstract.
Claim 3: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, wherein the processor is configured to: determine a portion of which the contour is not depicted in the three-dimensional image data of the contour of the organ or a portion of which the region is not depicted in the three-dimensional image data of the region of the organ, as a non-depicted portion.
-Fujisawa teaches that the system, by the processor, determines a portion of the organ where the contour or region is not detected, identifies this as an "unimaged region" or "data miss area," which aligns with the claimed non-depicted portion, and then notifies the user about its presence, ¶Abstract, ¶0004, ¶0074-0075, ¶0082, ¶0095, ¶0097, ¶0108-0109, ¶0123, ¶0128-0129.
Claim 5: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to determine that a discontinuous portion having an interval equal to or greater than a predetermined interval in the region is the non-depicted portion. (¶0127, ‘The display control function 71 determines whether or not the ratio of the volume or contour of the unimaged organ region to the volume or contour of the entire organ shape derived in step ST15 is equal to or less than a threshold value (e.g. 20%). If it is determined as “YES” in step ST17, that is, if it is determined that the volume or contour ratio of the unimaged organ region is less than or equal to the threshold value, the display control function 71 displays on the display 20 that the data is sufficient with only a little missing data (step ST18).’, see also associated paragraphs, ¶0128-0129)
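For illustration only, a minimal sketch of the threshold comparison characterized in ¶0127, in which the ratio of the unimaged organ region to the entire organ shape is compared against a threshold value (e.g., 20%); the function and variable names are assumptions.

```python
# Illustrative sketch only: comparing the unimaged-region fraction against a
# threshold (e.g., 20%). Function and variable names are assumptions.
def data_is_sufficient(unimaged_voxels: int, entire_organ_voxels: int,
                       threshold: float = 0.20) -> bool:
    """Return True when the unimaged fraction is at or below the threshold."""
    if entire_organ_voxels == 0:
        raise ValueError("entire organ volume must be nonzero")
    return (unimaged_voxels / entire_organ_voxels) <= threshold

print(data_is_sufficient(unimaged_voxels=1500, entire_organ_voxels=10000))  # True (15%)
print(data_is_sufficient(unimaged_voxels=2500, entire_organ_voxels=10000))  # False (25%)
```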
Claim 6: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to: detect a noncontact state in which the ultrasound probe is separated from a body surface of the subject; once determining the noncontact state, determine whether the non-depicted portion is present. (¶0105, ‘The acquiring function 171 determines whether to finish the ultrasonic imaging (step ST4). The acquiring function 171 may determine to finish the ultrasonic imaging on the basis of the finish instruction input by the operator via the input interface 19, or may determine that the ultrasonic imaging is finished if the ultrasonic probe 30 is in the air apart from the body surface of the subject after a certain time elapsed. For example, whether the ultrasonic probe 30 is in the air may be determined based on the position data of the ultrasonic probe 30.’)
-FIG. 9: the system determines whether to finish ultrasonic imaging (step ST4), ¶0105. If imaging is determined to be finished (i.e., "YES" at ST4), then the deriving function 172 derives the organ shape and determines the unimaged organ region, ¶0105-0107. Therefore, once the probe is determined to be in the air apart from the body surface (i.e., the noncontact state), the determination of the unimaged region occurs.
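For illustration only, a minimal sketch (not Fujisawa's implementation) of determining from position data that the probe has been in the air apart from the body surface for a certain time, in the spirit of ¶0105 as characterized above; the clearance threshold, timing values, and function names are assumptions.

```python
# Illustrative sketch only: deciding to finish imaging once the probe has
# stayed apart from the body surface for a certain time, based on position
# data. The clearance threshold and timing values are assumptions.
def probe_in_air(probe_z_mm: float, body_surface_z_mm: float,
                 clearance_mm: float = 20.0) -> bool:
    """Assume the probe is in the air once it is well above the body surface."""
    return probe_z_mm > body_surface_z_mm + clearance_mm

def should_finish_imaging(samples, elapsed_limit_s: float = 2.0) -> bool:
    """samples: list of (timestamp_s, probe_z_mm). Finish once the probe has
    stayed in the air for at least elapsed_limit_s."""
    in_air_since = None
    for t, z in samples:
        if probe_in_air(z, body_surface_z_mm=0.0):
            if in_air_since is None:
                in_air_since = t
            if t - in_air_since >= elapsed_limit_s:
                return True
        else:
            in_air_since = None
    return False

# Usage: the probe is lifted off the body surface at t = 1.0 s and kept in the air.
samples = [(0.0, 5.0), (0.5, 5.0), (1.0, 40.0), (2.0, 40.0), (3.5, 40.0)]
print(should_finish_imaging(samples))  # True
```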
Claim 7: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to, once a predetermined period of time has elapsed from a start of scanning by the ultrasound probe, determine whether the non-depicted portion is present.
-FIG. 9: the system determines whether to finish ultrasonic imaging (step ST4), ¶0105. If imaging is determined to be finished (i.e., "YES" at ST4), then the deriving function 172 derives the organ shape and determines the unimaged organ region, ¶0105-0107. Therefore, once a predetermined period of time elapses and the probe is no longer in contact, the determination of the unimaged region occurs.
Claim 8: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to, once the user gives an instruction to complete an examination, determine whether the non-depicted portion is present. (¶0105, ‘The acquiring function 171 determines whether to finish the ultrasonic imaging (step ST4). The acquiring function 171 may determine to finish the ultrasonic imaging on the basis of the finish instruction input by the operator via the input interface 19, or may determine that the ultrasonic imaging is finished if the ultrasonic probe 30 is in the air apart from the body surface of the subject after a certain time elapsed. For example, whether the ultrasonic probe 30 is in the air may be determined based on the position data of the ultrasonic probe 30.’)
-FIG. 9: the system determines whether to finish ultrasonic imaging (step ST4), ¶0105. If imaging is determined to be finished (i.e., "YES" at ST4), then the deriving function 172 derives the organ shape and determines the unimaged organ region, ¶0105-0107. Therefore, once the operator inputs the finish instruction to complete the examination, the determination of the unimaged region occurs.
Claim 9: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to, once the depicted contour is closed, determine whether the non-depicted portion is present. (¶0107-0109)
-FIG. 9: ST5, derive entire organ shape and imaged organ region; ST6, determine unimaged organ region; ST7, determine whether an unimaged organ region exists ("YES" or "NO").
Claim 10: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, wherein the processor is configured to, once the depicted contour is closed and the region that is not depicted is not present in a range inside the contour, notify the user that scanning is completed. (FIG. 12-ST1-ST3 & ST15-ST20, ¶0126-0131)
-FIG. 12: ST15, derive entire organ shape and imaged organ region; ST16, determine unimaged organ region; ST17, determine whether an unimaged organ region exists ("YES" or "NO"); ST18, display that the data is sufficient (this display effectively serves as a notification to the user that scanning has achieved a sufficient level of completeness).
Claim 11: Modified Fujisawa discloses all the elements above in claim 5, Fujisawa discloses, the processor is configured to, once the depicted contour is closed and the region that is not depicted is not present in a range inside the contour, notify the user that scanning is completed. (FIG. 12-ST1-ST3 & ST15-ST20, ¶0126-0131)
-FIG. 12: ST15, derive entire organ shape and imaged organ region; ST16, determine unimaged organ region; ST17, determine whether an unimaged organ region exists ("YES" or "NO"); ST18, display that the data is sufficient (this display effectively serves as a notification to the user that scanning has achieved a sufficient level of completeness).
Claim 12: Modified Fujisawa discloses all the elements above in claim 6, Fujisawa discloses, wherein the processor is configured to, once the depicted contour is closed and the region that is not depicted is not present in a range inside the contour, notify the user that scanning is completed. (FIG. 12-ST1-ST3 & ST15-ST20, ¶0126-0131)
-FIG. 12: ST15, derive entire organ shape and imaged organ region; ST16, determine unimaged organ region; ST17, determine whether an unimaged organ region exists ("YES" or "NO"); ST18, display that the data is sufficient (this display effectively serves as a notification to the user that scanning has achieved a sufficient level of completeness).
Claim 13: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, wherein the processor is configured to: display a three-dimensional schema image on the monitor; and display the contour and the region in an emphasized manner on the three-dimensional schema image. (¶0042, ¶0108)
Claim 14: Modified Fujisawa discloses all the elements above in claim 3, Fujisawa discloses, comprising: a reference contour memory configured to store a reference contour that is able to be depicted, wherein the processor is configured to, once that a contour corresponding to the reference contour stored in the reference contour memory is not depicted, notify the user to move the ultrasound probe.
-The system of Fujisawa utilizes a "3D model of the entire organ," which acts as a reference contour. This 3D model can be pre-acquired or be a general organ shape, ¶0108. The 3D model is stored in the main memory 18, which works in concert with the image memory 15, ¶0041-0042, ¶0047, ¶0106-0108. The determining function 173 addresses the non-depicted portion by "extract[ing] an organ contour from the ultrasonic image of cross-sections arranged three-dimensionally" and then "collat[ing] the arranged one with a 3D model of the entire organ," ¶0108. Subsequently, it "determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model," ¶0108. This unimaged organ region is the portion of the reference contour that has not been detected or depicted by the probe's scanning. If the determining function 173 determines that an unimaged organ region is present ("YES" in step ST7), information regarding the unimaged organ region is displayed on the display, ¶0109. This information is used by the operator to image the missing area by moving the ultrasonic probe to the indicated positions and orientations of the unimaged region, ¶0119-0120.
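For illustration only, a minimal sketch of determining an unimaged organ region by removing the already-imaged contour from the reference (3D model) contour, in the spirit of ¶0108 as characterized above; the boolean voxel masks are assumptions.

```python
# Illustrative sketch only: the "unimaged organ region" as the reference-model
# contour voxels minus the already-imaged contour voxels. The boolean masks
# are assumptions for illustration.
import numpy as np

reference_contour = np.zeros((64, 64, 64), dtype=bool)
reference_contour[16:48, 16:48, 16:48] = True         # stand-in 3D model contour

imaged_contour = np.zeros_like(reference_contour)
imaged_contour[16:48, 16:48, 16:32] = True            # portion actually scanned

unimaged_region = reference_contour & ~imaged_contour  # reference minus imaged
print(int(unimaged_region.sum()), "unimaged contour voxels remain")
```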
Claim 15: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, wherein the processor is configured to display, on the monitor, a three-dimensional image of the contour and a three-dimensional image of the region, as the image representing the contour and the image representing the region. -Based on the 35 USC § 112(b) interpretation, the phrase is interpreted as displaying, on the monitor, a three-dimensional image representing the contour and the region (i.e., one image) based on the image data of the contour and the region.
-Fujisawa teaches 3D image data, ¶0040-0042, and displaying the 3D model of the entire organ and the unimaged region, ¶0108, ¶0112-0115. The system effectively uses and displays a 3D schema (the 3D model of the entire organ and thus the region) and overlays information about the detected and undetected (unimaged) regions directly onto this 3D representation to guide the user, ¶0109, ¶0112-0122.
Claim 17: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, wherein the processor is configured to display the image representing the contour and the image representing the region in a superimposed manner on the monitor.
-Fujisawa discloses displaying the image of the contour and the image of the region in a superimposed manner, ¶0108-0109, ¶0112-0122. Specifically, Fujisawa indicates that the detected unimaged region, the suggested beam range to fill it, and guidance of probe placement are represented visually on the 3D model of the organ, thereby superimposing this information.
Claim 19: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, wherein the position and posture sensor includes an inertial sensor, a magnetic sensor, or an optical sensor. (¶0055)
Claim 20: Fujisawa discloses, A control method of an ultrasound diagnostic apparatus, (FIG. 3, FIG. 9, FIG. 12) the control method comprising:
acquiring position and posture information of an ultrasound probe; (position sensor 40, FIG. 3, ¶0053)
acquiring a plurality of ultrasound images of a subject by transmitting and receiving ultrasound beams by using the ultrasound probe; (¶Abstract, ¶0026-0027, ¶0077)
generating three-dimensional image data of a contour of an organ included in each of the plurality of ultrasound images based on the acquired position and posture information; generating three-dimensional image data of a region of the organ included in each of the plurality of ultrasound images based on the acquired position and posture information; and
-The ultrasonic image data, in the form of multiple cross-sections, correspond to position data of the ultrasonic probe, ¶0061-0063, ¶0103-0104. This position data includes the coordinates (X, Y, Z) and tilt angle (posture) of the probe, ¶0054. The acquired ultrasonic images are arranged three-dimensionally in a 3D memory based on the position data, ¶0078-0080, ¶0103-0104. From this collection of three-dimensionally arranged ultrasonic image data and position data, the deriving function 172 determines the shape of the entire target organ and the imaged organ region, ¶0076-0077, ¶0082, ¶0107. The determining function 172 specifically extracts an organ contour from the ultrasonic images of the cross-sections arranged three-dimensionally, ¶0108. This extracted organ contour is then arranged in a three-dimensional manner, ¶0108, and thus the region of the organ is arranged in a three-dimensional manner. The resulting volume data represents the imaged organ region in three dimensions, ¶0076-0077, ¶0082, ¶0107.
Fujisawa fails to disclose:
displaying, on a monitor, an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region,
wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other.
However, Orlando in the context of segmentation techniques using deep learning on ultrasound images discloses, displaying, on a monitor, an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region,
-Regarding the region, Orlando teaches 3D TRUS images serving as the 3D region data acquired from the patient, [2.A Clinical dataset / pg. 2415], ‘Three-dimensional images of the prostate were acquired using end-fire (as used in prostate biopsy) and side-fire SR (as used in some HDR-BT) mechanical scanning approaches (Fig. 1).19 Both methods rotate a TRUS transducer around the long-axis to create geometrically different reconstructed 3D images that are influenced by the transducer array configuration. The images used in this study were acquired with the C9-5 transducer with the iU22 (Philips, Amsterdam, the Netherlands), the C9-5 and BPTRT9-5 transducers with the ATL HDI-5000 (Philips, Amsterdam, the Netherlands), and the 8848 transducer with the Profocus 2202 (BK Medical, Peabody, MA, USA) ultrasound machine models. The total dataset of 246 3D TRUS images consisted of 104 end-fire and 142 side-fire 3D TRUS images and was split into training, validation, and testing datasets as shown in Table I. Manual 3D prostate segmentations (excluding the seminal vesicles) were performed by an observer (IG) with approximately 15 yr of TRUS prostate image analysis experience.’
-Regarding the contour: the 3D image contour (i.e., boundary) is generated entirely independently of the raw region data. It is generated manually by an observer with 15 years of experience or computationally via deep learning, such as U-Net or 3D V-NET, see [2.A Clinical dataset / pg. 2415], [2.B.3. 3D Reconstruction / pg 2416], ‘2.B.3 3D Reconstruction Predicted 3D prostate segmentations were obtained by segmenting multiple 2D radial frames generated by rotation around a central axis, followed by reconstruction to a 3D surface following a reconstruction method similar to Qiu et al.11 Previous observations have noted that segmenting the prostate on slices near the apex and base of the prostate can be challenging due to boundary incompleteness,15 so we chose to radially slice the 3D prostate image as opposed to transverse slicing in an attempt to improve segmentations at all boundaries. This choice was motivated by the experience of segmenting the prostate when the center of the gland is in-plane, which typically presents as an easier image to accurately define and segment the boundaries on the left and right sides of the 2D image. In contrast to this, a transverse slicing approach would result in 2D images with the prostate appearing as a different size and shape, with this difference more pronounced at the prostate apex and base, and when comparing end-fire and side-fire image geometries.’, [2.C Evaluation and comparison / pg. 2417 left col], ‘The performance of our algorithm was compared against three state-of-the-art fully 3D predicting CNNs (V-Net,24 Dense V-Net,25 and High-resolution 3D-Net26) using an open-source implementation on the NiftyNet platform.27 It is often assumed that performing a prediction based on 3D information allows for an improved result due to increased spatial context, so we completed a direct comparison on the same test dataset to investigate this hypothesis. Similar to our proposed method, the same 165/41 3D TRUS images (Table I) were used for training/validation, respectively. The 3D V-Net was chosen to optimize hyperparameters, including loss function, due to its widespread use and performance in preliminary experiments. For simplicity, these hyperparameters were also used for the Dense V-Net and high-resolution 3D-Net. Parameters were chosen to maximize the spatial window size and usable memory on the GPU with optimized hyperparameters shown in Table II. Previous work has shown improved performance with a hybrid loss function,16 so we compared performance between a dice loss function and a dice plus cross-entropy (DiceXEnt) loss function, as provided in NiftyNet, using the 3D V-Net. Although NiftyNet offers a patch-based analysis, preliminary experiments resulted in 3D segmentations with many flat surfaces throughout the prediction corresponding to patch edges. Since we had one structure of interest (i.e., the prostate), we did not perform a patch-based analysis and predictions were performed on a resized image to match the spatial window.’
-Regarding the superimposed display in different manners: FIGS. 1 & 3 provide colored shapes representing the prostate contour superimposed on 3D bounding boxes containing grayscale planes that represent the acquired 3D ultrasound region. In FIGS. 4, 5, & 6, the display has multiple independently generated 3D surfaces superimposed over 3D slices of the 3D ultrasound region. These contours are displayed in different manners using distinct colors to distinguish how they were generated: red for manual, blue for rmU-Net, and yellow for V-Net.
wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other.
- The 3D image contour (i.e., boundary) is generated entirely independently of the 3D region data. The 3D region data is acquired through ultrasound scanning of a patient, while the 3D contour data is generated either by manual human annotation or computationally by the neural networks, see [2.A Clinical dataset / pg. 2415], [2.B.3. 3D Reconstruction / pg 2416], [2.C Evaluation and comparison / pg. 2417 left col].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitor of Fujisawa to display an image representing a three-dimensional shape of the contour and an image representing a three-dimensional shape of the region in a superimposed manner and in different displaying manners, based on the three-dimensional image data of the contour and the three-dimensional image data of the region, wherein the three-dimensional image data of the contour and the three-dimensional image data of the region are generated independently of each other, in view of the teachings of Orlando. The motivation to do so would yield predictable results such as providing accurate targeting, reducing patient risk, and improving clinical workflow, as suggested by Orlando, ¶Abstract.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa (US 2021/0093303 A1) in view of Orlando et al (Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys. 2020 Jun), as applied to claim 1, in further view of Fukuda et al (US 20170252000 A1).
Claim 16: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa discloses, extracting 2D images from 3D data for a 2D display, ¶0042, ‘in order to generate various 2D image data so as to display the volume data stored in the 3D memory on the display 20, the image generating circuit 14 performs processing for displaying the volume data on the 2D display and processing for displaying the 3D data three-dimensionally, with respect to the volume data.’. Fujisawa fails to explicitly disclose:
wherein the processor is configured to:
extract a two-dimensional tomographic image of the contour and a two-dimensional tomographic image of the region on a same cut section, from the three-dimensional image data of the contour and the three-dimensional image data of the region, respectively; and
display, on the monitor, the two-dimensional tomographic image of the contour and the two-dimensional tomographic image of the region, as the image representing the contour and the image representing the region.
However, Fukuda in the context of the known function of three-dimensional segmentation, discloses, extract a two-dimensional tomographic image of the contour and a two-dimensional tomographic image of the region on a same cut section, from the three-dimensional image data of the contour and the three-dimensional image data of the region, respectively; and
-The ultrasound apparatus performs 3D scanning to acquire 3D signals and generate 3D volume data that includes the heart of the subject, ¶0021, ¶0024-0025, ¶0033, ¶0057-0058, Claim 14. The image generating circuitry 141 generates multi-planar reconstruction (MPR) images from the volume data, ¶0038-0039. MPR is a technique for producing 2D images (tomographic images) from 3D data, ¶0038-0039, ¶0073, ¶0077, ¶0097. Volume data corresponding to selected regions are generated by the image generating circuitry 141, which constructs cross-sectional images by MPR from each of the selected volume data, ¶0038-0039, ¶0073, ¶0077, ¶0097. The control function 140a then causes the display 103 to display these generated cross-sectional images, ¶0073. The identification function 170b automatically traces and identifies the 3D contour of the heart for each selected volume data, ¶0076, ¶0092. Alternatively, when 3D scanning is performed, the identification function 170b generates a plurality of cross-sectional images by MPR processing from the volume data and then identifies the contour of the LV of the heart for each of the cross-sectional images, ¶0077, ¶0097. This means the contour is identified on the already generated two-dimensional tomographic image of the region, ensuring it is on the "same cut section," respectively.
display, on the monitor, the two-dimensional tomographic image of the contour and the two-dimensional tomographic image of the region, as the image representing the contour and the image representing the region.
-Fukuda teaches that, from 3D volume data, 2D tomographic images of the heart region can be generated using MPR. Subsequently, the contour is either identified directly in three dimensions and that information used, or, more commonly, the contour is identified on these generated two-dimensional tomographic images, ¶0077, ¶0097. An "image" or representation of this contour is then displayed directly on that same two-dimensional tomographic image of the region, allowing for simultaneous visualization of both on the same "cut section," ¶0112-0116.
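For illustration only, a minimal sketch (not Fukuda's implementation) of extracting a 2D cross-section on the same cut plane from both a 3D region volume and a 3D contour mask, in the spirit of the MPR processing characterized above; the synthetic arrays, axis-aligned cut plane, and slice index are assumptions.

```python
# Illustrative sketch only: extracting an axis-aligned 2D cross-section (a very
# simple form of multi-planar reconstruction) from a 3D region volume and from
# a 3D contour mask on the same cut plane, so both can be shown together. The
# synthetic volume, mask, and slice index are assumptions for illustration.
import numpy as np

def extract_cut_section(volume: np.ndarray, z_index: int) -> np.ndarray:
    """Return the 2D tomographic image at the given z cut plane."""
    return volume[:, :, z_index]

# Synthetic 3D region data and an independently stored 3D contour mask.
region_volume = np.random.rand(64, 64, 64).astype(np.float32)
contour_mask = np.zeros(region_volume.shape, dtype=bool)
contour_mask[20:44, 20:44, 30:34] = True  # stand-in organ boundary voxels

z = 32  # same cut section for both data sets
region_slice = extract_cut_section(region_volume, z)
contour_slice = extract_cut_section(contour_mask, z)
print(region_slice.shape, contour_slice.shape)  # (64, 64) (64, 64)
```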
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processor of modified Fujisawa to include extracting a two-dimensional tomographic image of the contour and a two-dimensional tomographic image of the region on a same cut section, from the three-dimensional image data of the contour and the three-dimensional image data of the region, respectively, and displaying, on the monitor, the two-dimensional tomographic image of the contour and the two-dimensional tomographic image of the region, as the image representing the contour and the image representing the region, as taught by Fukuda. The motivation to do so yields predictable results such as assisting a user in easily performing diagnosis of the organ by providing useful information about the shape and position of the contour, ¶0114-0116 of Fukuda.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa (US 2021/0093303 A1) in view of Orlando et al (Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys. 2020 Jun), as applied to claim 1, in further view of Gooding et al (US 2023/0100255 A1).
Claim 18: Modified Fujisawa discloses all the elements above in claim 1, Fujisawa fails to disclose: wherein the processor is configured to display the image representing the contour and the image representing the region side by side on the monitor.
However, Gooding in the context of interacting contouring of medical images discloses, wherein the processor is configured to display the image representing the contour and the image representing the region side by side on the monitor. (FIG. 6, ¶0105, ¶0146, ¶0156)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processor of modified Fujisawa to be configured to display the image representing the contour and the image representing the region side by side on the monitor, as taught by Gooding. The motivation to do so yields predictable results such as providing improved spatial and imaging context to the user.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Schoisswohl et al (US 2006/0056690 A1) discloses methods for 3D segmentation of ultrasound images to 2D images of contours and regions of interest, ¶Abstract.
Raynaud et al (US 2025/0134493 A1) discloses detecting missing regions and contours in image segmentation techniques, ¶Abstract, ¶0020, ¶0025-0026.
Altmann et al (US 2006/0241445 A1) discloses cardiac imaging using ultrasound contour reconstruction.
Zur et al (US 2009/0080738 A1) discloses edge detection in ultrasound imaging with respect to closed contours of a region of interest.
Miseikis (US 2025/0072862 A1) discloses reconstruction of missing portions of a fetus by generative AI.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Robinson whose telephone number is (571)272-9019. The examiner can normally be reached M-F 9:00AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.A.R./Examiner, Art Unit 3798
/PASCAL M BUI PHO/Supervisory Patent Examiner, Art Unit 3798