DETAILED ACTION
This Office action is in response to the communications received on October 13, 2025 and November 13, 2025 concerning application No. 17/634,015, filed on February 9, 2022.
Claims 1, 2, 4-7, 10-15, and 22-25 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on October 13, 2025 and November 13, 2025 have been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 14, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Regarding applicant's arguments for new claim 25, please see the rejection below for how the Schneider reference is applied to teach the new claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5, 6, 10, 14, 15, and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (WO2015068073A1, hereinafter Schneider) as evidenced by Nelson et al. (US5158088, hereinafter Nelson) in view of Radulescu et al. (US 20150310581, hereinafter Radulescu).
Regarding claim 1, Schneider teaches a steerable multi-plane ultrasound imaging system for steering image planes (Abstract, discloses a steerable ultrasound system with probe for steering a plurality of intersecting image planes), the system comprising:
a beamforming ultrasound imaging probe (pg. 5, lines 5-8 disclose an ultrasound probe which includes an array and micro-beamformer, making the probe a beamforming ultrasound imaging probe) configured to:
generate ultrasound beams that define a plurality of intersecting image planes (pg. 14, lines 26-30 and fig. 8 discloses transducer 500, which is part of the probe, transmitting intersecting biplanes 510 and 512), comprising at least a first image plane and a second image plane (plane 510 is considered the first image plane and plane 512 is considered the second image plane. Also see pg. 7, lines 18-25, which discloses the first plane as the reference plane as well as a second plane);
wherein ultrasound signals are transmitted between the beamforming ultrasound imaging probe and an ultrasound transducer disposed within a field of view of the beamforming ultrasound imaging probe (pg. 16, lines 9-15 disclose a transducer is placed at the catheter tip and the catheter tip is identified within the image, meaning the transducer is disposed within a field of view of the beamforming ultrasound imaging probe, see lines 18-19); and
at least one processor (the electronic circuitry of device 10 in fig. 1) in communication with the beamforming ultrasound imaging probe (fig. 1 shows the electronic circuitry of the device is in communication with the probe 70), the at least one processor configured to:
cause the beamforming ultrasound imaging probe to adjust an orientation of the first image plane (pg. 8, lines 2-7 discloses the motion detector 30 tracks the motion of the target and controls the beamformer controller to steer the imaging planes (including the first image plane) of the probe to the target, the target being the transducer on the catheter tip. Also pg. 16, lines 16-20, where the image plane orientations are adjusted to continually visualize the catheter tip in the generated images) such that the first image plane passes through a position of the ultrasound transducer (in order for both planes to image the catheter tip, each plane must pass through the position of the transducer at the catheter tip) and a magnitude of the ultrasound signals is maximized (pg. 16, lines 9-15 discloses the catheter tip includes a transducer as disclosed in US Pat. 5158088, Nelson, which discloses in claim 1, “said medical instrument ultrasonic transducer receiving maximal ultrasonic wave energy when located in the image plane of said ultrasonic imaging transducer”. When the transducer is located within the first image plane the steering stops and, per Nelson, maximal ultrasonic wave energy is received; therefore the magnitude of the ultrasound signals is maximized);
cause the beamforming ultrasound imaging probe to adjust an orientation of the second image plane such that an intersection between the first image plane and the second image plane passes through the position of the ultrasound transducer (pg. 16, lines 14-16, “the plane of the second image is then steered to image the catheter tip 46 in the second image”; in order for both planes to image the catheter tip, the planes must pass through the position of the transducer at the catheter tip, and as discussed above, planes that pass through the position of the transducer are at an orientation where the magnitude of the signal is maximized);
compute an image quality metric corresponding to an image feature (pg. 9, lines 2-7 discloses the image planes which do not already contain the target are steered to contain the target; in order to know whether the image planes must be steered, it must be determined whether the target is contained within the plane, and this determination is considered the image quality metric corresponding to the image feature), and
cause the beamforming ultrasound imaging probe to re-orient at least one of the first image plane or the second image plane to maximize the image quality metric while maintaining the intersection passing through the position of the ultrasound transducer (pg. 9, lines 2-7 discloses adjusting the image planes that do not contain the target so that the target is contained within the planes (maximizing the image quality metric), thereby also maintaining the intersection passing through the position of the transducer).
Schneider does not specifically teach performing a segmentation of an image feature in at least one of the first image plane or the second image plane; and the image quality metric is specifically a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature.
However, Radulescu, in a similar field of endeavor, teaches performing a segmentation of an image feature in an image plane ([0057] discloses performing an image-segmentation of the bodily organ (image feature)); and computing a value of an image quality metric ([0060]-[0062] and [0069]-[0070] disclose adding to or subtracting from the progress bar (quality metric) based on whether the target is present within the current view (image plane). The amount by which the progress bar increases or decreases is considered the value) that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature ([0063] “the apparatus can determine, based on the model, whether the heart, or heart section, is entirely or sufficiently within the current field of view of the probe”. [0071] discloses comparing the current field of view segmentation with the model of the image feature). Radulescu additionally teaches re-orienting the image plane to maximize the value of the image quality metric ([0066] discloses tilting the probe. [0070] discloses using electronic steering, and when the entire heart is within the current field of view the progress bar is increased, thereby maximizing the value of the image quality metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Schneider to have performed a segmentation of an image feature in at least one of the first image plane or the second image plane, and to have the image quality metric be a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature, in order to improve image quality, as recognized by Radulescu ([0076]).
Regarding claim 2, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the ultrasound transducer is an ultrasound sensor (pg. 16, line 12, “locating transducer”), and the ultrasound signals are ultrasound imaging signals transmitted by the beamforming ultrasound imaging probe (pg. 6, lines 9-20 discloses the received signals are used to generate an image, meaning the signals transmitted are ultrasound imaging signals) and received by the ultrasound sensor (Nelson claim 1 discloses “an ultrasonic transducer associated therewith for reception of ultrasonic wave energy from said imaging transducer”).
Regarding claim 5, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the beamforming ultrasound imaging probe comprises a two-dimensional array of transducer elements (Abstract, “two dimensional array transducer probe”) having a normal axis (fig. 6 shows a z-axis which is perpendicular to the face of the transducer array, the z-axis is considered the normal axis), and
that, to adjust the orientation of the first image plane or the second image plane, the at least one processor is configured to at least one of:
tilt the respective image plane with respect to the normal axis (pg. 12, lines 7-17 disclose tilting at least the second plane relative to a zero degree orientation, the zero degree orientation represents the normal axis),
rotate the respective image plane about the normal axis (pg. 12, lines 5-6, “permits rotation of the two planes about their common center line”), and
translate the respective image plane perpendicularly with respect to the normal axis.
Regarding claim 6, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the at least one processor is further configured to:
track movements of the ultrasound transducer to each of a plurality of new positions by adjusting the orientation of at least one of the first image plane or the second image plane such that the intersection passes through each new position of the ultrasound transducer (pg. 8, lines 2-7 discloses the motion detector 30 tracks the motion of the target and controls the beamformer to steer the imaging planes of the probe to the target. Pg. 16, lines 16-19 discloses the catheter tip (target that includes the ultrasound transducer) is tracked and the image plane orientations are adjusted in order to continually visualize the catheter tip, therefore the intersection of the first and second planes passes through the new position of the ultrasound transducer).
Regarding claim 10, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Radulescu further teaches that, to compute the value of the image quality metric, the at least one processor is configured to fit the model to the image feature ([0057] discloses the segmentation is performed using the model, which pg. 10 of the present application's specification discloses is an example of a model-fitting technique. [0063] further discloses “the apparatus 100 can determine, based on the model, whether the heart, or heart section, is entirely or sufficiently within the current field of view of the probe”).
Regarding claim 14, Schneider teaches a method of steering a plurality of intersecting planes (Abstract, discloses a steerable ultrasound process using a probe for steering a plurality of intersecting image planes), the method comprising:
generating a plurality of ultrasound beams that define a plurality of intersecting image planes of a beamforming ultrasound imaging probe (pg. 14, lines 26-30 and fig. 8 discloses transducer 500, which is part of the probe, transmitting intersecting biplanes 510 and 512), comprising at least a first image plane and a second image plane (plane 510 is considered the first image plane and plane 512 is considered the second image plane. Also see pg. 7, lines 18-25, which discloses the first plane as the reference plane as well as a second plane);
transmitting ultrasound signals between the beamforming ultrasound imaging probe and an ultrasound transducer disposed within a field of view of the beamforming ultrasound imaging probe (pg. 16, lines 9-15 disclose a transducer is placed at the catheter tip and the catheter tip is identified within the image, meaning the transducer is disposed within a field of view of the beamforming ultrasound imaging probe; see lines 18-19);
causing the beamforming ultrasound imaging probe to adjust an orientation of the first image plane (pg. 8, lines 2-7 discloses the motion detector 30 tracks the motion of the target and controls the beamformer controller to steer the imaging planes (including the first image plane) of the probe to the target, the target being the transducer on the catheter tip. Also pg. 16, lines 16-20, where the image plane orientations are adjusted to continually visualize the catheter tip in the generated images) such that the first image plane passes through a position of the ultrasound transducer (in order for both planes to image the catheter tip, each plane must pass through the position of the transducer at the catheter tip) and a magnitude of the ultrasound signals is maximized (pg. 16, lines 9-15 discloses the catheter tip includes a transducer as disclosed in US Pat. 5158088, Nelson, which discloses in claim 1, “said medical instrument ultrasonic transducer receiving maximal ultrasonic wave energy when located in the image plane of said ultrasonic imaging transducer”. When the transducer is located within the first image plane the steering stops and, per Nelson, maximal ultrasonic wave energy is received; therefore the magnitude of the ultrasound signals is maximized);
causing the beamforming ultrasound imaging probe to adjust an orientation of the second image plane such that an intersection between the first image plane and the second image plane passes through the position of the ultrasound transducer (pg. 16, lines 14-16, “the plane of the second image is then steered to image the catheter tip 46 in the second image”; in order for both planes to image the catheter tip, the planes must pass through the position of the transducer at the catheter tip, and as discussed above, planes that pass through the position of the transducer are at an orientation where the magnitude of the signal is maximized);
computing an image quality metric corresponding to an image feature (pg. 9, lines 2-7 discloses the image planes which do not already contain the target are steered to contain the target; in order to know whether the image planes must be steered, it must be determined whether the target is contained within the plane, and this determination is considered the image quality metric corresponding to the image feature), and
causing the beamforming ultrasound imaging probe to re-orient at least one of the first image plane or the second image plane to maximize the image quality metric while maintaining the intersection passing through the position of the ultrasound transducer (pg. 9, lines 2-7 discloses adjusting the image planes that do not contain the target so that the target is contained within the planes (maximizing the image quality metric), thereby also maintaining the intersection passing through the position of the transducer).
Schneider does not specifically teach performing a segmentation of an image feature in at least one of the first image plane or the second image plane; and the image quality metric is specifically a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature.
However, Radulescu, in a similar field of endeavor, teaches performing a segmentation of an image feature in an image plane ([0057] discloses performing an image-segmentation of the bodily organ (image feature)); and computing a value of an image quality metric ([0060]-[0062] and [0069]-[0070] disclose adding to or subtracting from the progress bar (quality metric) based on whether the target is present within the current view (image plane). The amount by which the progress bar increases or decreases is considered the value) that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature ([0063] “the apparatus can determine, based on the model, whether the heart, or heart section, is entirely or sufficiently within the current field of view of the probe”. [0071] discloses comparing the current field of view segmentation with the model of the image feature). Radulescu additionally teaches re-orienting the image plane to maximize the value of the image quality metric ([0066] discloses tilting the probe. [0070] discloses using electronic steering, and when the entire heart is within the current field of view the progress bar is increased, thereby maximizing the value of the image quality metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Schneider to have performed a segmentation of an image feature in at least one of the first image plane or the second image plane, and to have the image quality metric be a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature, in order to improve image quality, as recognized by Radulescu ([0076]).
Regarding claim 15, Schneider teaches a non-transitory computer-readable storage medium having stored thereon a computer program comprising instructions (pg. 8, line 16-pg. 9, line 34 discloses a series of steps being performed; the steps are considered the instructions, and the part of system 10 in fig. 1 that stores the steps is considered the non-transitory computer-readable medium having a computer program comprising the instructions) which, when executed by a processor,
generate a plurality of ultrasound beams of a beamforming ultrasound imaging probe to define a plurality of intersecting image planes (pg. 14, lines 26-30 and fig. 8 discloses transducer 500, which is part of the probe, transmitting intersecting biplanes 510 and 512), comprising at least a first image plane and a second image plane (plane 510 is considered the first image plane and plane 512 is considered the second image plane. Also see pg. 7, lines 18-25, which discloses the first plane as the reference plane as well as a second plane), wherein ultrasound signals are transmitted between the beamforming ultrasound imaging probe and an ultrasound transducer disposed within a field of view of the beamforming ultrasound imaging probe (pg. 16, lines 9-15 disclose a transducer is placed at the catheter tip and the catheter tip is identified within the image, meaning the transducer is disposed within a field of view of the beamforming ultrasound imaging probe; see lines 18-19);
cause the beamforming ultrasound imaging probe to adjust an orientation of the first image plane (pg. 8, lines 2-7 discloses the motion detector 30 tracks the motion of the target and controls the beamformer controller to steer the imaging planes (including the first image plane) of the probe to the target, the target being the transducer on the catheter tip. Also pg. 16, lines 16-20, where the image plane orientations are adjusted to continually visualize the catheter tip in the generated images) such that the first image plane passes through a position of the ultrasound transducer (in order for both planes to image the catheter tip, each plane must pass through the position of the transducer at the catheter tip) and a magnitude of the ultrasound signals is maximized (pg. 16, lines 9-15 discloses the catheter tip includes a transducer as disclosed in US Pat. 5158088, Nelson, which discloses in claim 1, “said medical instrument ultrasonic transducer receiving maximal ultrasonic wave energy when located in the image plane of said ultrasonic imaging transducer”. When the transducer is located within the first image plane the steering stops and, per Nelson, maximal ultrasonic wave energy is received; therefore the magnitude of the ultrasound signals is maximized);
cause the beamforming ultrasound imaging probe to adjust an orientation of the second image plane such that an intersection between the first image plane and the second image plane passes through the position of the ultrasound transducer (pg. 16, lines 14-16, “the plane of the second image is then steered to image the catheter tip 46 in the second image”; in order for both planes to image the catheter tip, the planes must pass through the position of the transducer at the catheter tip, and as discussed above, planes that pass through the position of the transducer are at an orientation where the magnitude of the signal is maximized);
compute an image quality metric corresponding to an image feature (pg. 9, lines 2-7 discloses the image planes which do not already contain the target are steered to contain the target; in order to know whether the image planes must be steered, it must be determined whether the target is contained within the plane, and this determination is considered the image quality metric corresponding to the image feature), and
cause the beamforming ultrasound imaging probe to re-orient at least one of the first image plane or the second image plane to maximize the image quality metric while maintaining the intersection passing through the position of the ultrasound transducer (pg. 9, lines 2-7 discloses adjusting the image planes that do not contain the target so that the target is contained within the planes (maximizing the image quality metric), thereby also maintaining the intersection passing through the position of the transducer).
Schneider does not specifically teach performing a segmentation of an image feature in at least one of the first image plane or the second image plane; and the image quality metric is specifically a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature.
However, Radulescu, in a similar field of endeavor, teaches performing a segmentation of an image feature in an image plane ([0057] discloses performing an image-segmentation of the bodily organ (image feature)); and computing a value of an image quality metric ([0060]-[0062] and [0069]-[0070] disclose adding to or subtracting from the progress bar (quality metric) based on whether the target is present within the current view (image plane). The amount by which the progress bar increases or decreases is considered the value) that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature ([0063] “the apparatus can determine, based on the model, whether the heart, or heart section, is entirely or sufficiently within the current field of view of the probe”. [0071] discloses comparing the current field of view segmentation with the model of the image feature). Radulescu additionally teaches re-orienting the image plane to maximize the value of the image quality metric ([0066] discloses tilting the probe. [0070] discloses using electronic steering, and when the entire heart is within the current field of view the progress bar is increased, thereby maximizing the value of the image quality metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the non-transitory computer-readable medium disclosed by Schneider to have performed a segmentation of an image feature in at least one of the first image plane or the second image plane, and to have the image quality metric be a value of an image quality metric that is configured to quantify: a completeness of the segmentation; or a closeness between the segmentation and a model of the image feature, in order to improve image quality, as recognized by Radulescu ([0076]).
Regarding claim 25, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the at least one processor is configured to:
cause the beamforming ultrasound imaging probe to initially orient the first image plane and the second image plane such that an initial intersection between the first image plane and the second image plane does not pass through a position of the ultrasound transducer (pg. 8, line 16-pg. 9, line 34 discloses “In the first step A, the user or an automated algorithm identifies a target (dot 54) within one of the planes in the real-time multi-plane system. Notice that at this step, the user is not necessarily capable of viewing the target 54 in the other image planes”, meaning the first and second image planes are not intersecting at the target 54. If they were, the target would appear in both image planes);
stop the adjustment of the orientation of the first image plane when the first image plane comprises a maximum signal ultrasound beam for which the magnitude of the ultrasound signals is maximized (as discussed above, the magnitude of ultrasound signals is maximized when the ultrasound transducer is within the image plane, therefore by having the target (transducer) within the first image plane, the first image plane comprises a maximum signal ultrasound beam); and
stop the adjustment of the orientation of the second image plane when the second image plane intersects the maximum signal ultrasound beam (pg. 8, line 16-pg. 9, line 34 disclose “In step B, which immediately follows the target point selection (step A), the image planes which do not already contain the target point 54 are electronically steered such that the target 54 is contained in these other cut planes”. Therefore adjustment of the second image plane is stopped when the target is within the plane. As discussed above, the magnitude of ultrasound signals is maximized when the ultrasound transducer is within the image plane; therefore, by having the target (transducer) within the second image plane, the second image plane intersects the maximum signal ultrasound beam).
Regarding claim 22, Schneider in view of Radulescu teaches the system of claim 25, as set forth above. Schneider further teaches that, to cause the beamforming ultrasound imaging probe to adjust the orientation of the first image plane, the at least one processor is configured to repeatedly change the orientation of the first image plane (pg. 8, lines 2-7 discloses the motion detector 30 tracks the motion of the target and controls the beamformer controller to steer the imaging planes (including the first image plane) of the probe to the target, the target being the transducer on the catheter tip. Also pg. 16, lines 16-20, where the image plane orientations are adjusted to continually visualize the catheter tip in the generated images).
Regarding claim 23, Schneider in view of Radulescu teaches the system of claim 22, as set forth above. Schneider further teaches that, to cause the beamforming ultrasound imaging probe to stop the adjustment of the first image plane, the at least one processor is configured to stop the repetition (pg. 2, lines 2-4, “the probe is manipulated until the anatomy or object of interest is visualized in this plane”; therefore re-orientation and repetition are stopped once the transducer is visualized within the planes).
Regarding claim 24, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches if the magnitude of the ultrasound signals falls below a predetermined threshold value (pg. 15, lines 11-17 discloses that the image planes are re-steered to relocate the target in the center of the intersection of the planes. When the target is no longer at the center of the intersection, the signal intensity is no longer at its maximum value as discussed above and is therefore below a threshold value (the maximum signal value)), the processor is configured to cause the beamforming ultrasound imaging probe to:
further adjust the orientation of the first image plane such that the magnitude of the ultrasound signals is maximized (pg. 15, lines 11-17 discloses that the image planes are re-steered to relocate the target in the center of the intersection of the planes. By having the target be at the center of the intersection of the planes the magnitude of ultrasound signals transmitted between the probe and transducer (target) is maximized); and
re-orient the second image plane such that a further intersection between the first image plane and the second image plane passes through the position of the ultrasound transducer (pg. 15, lines 11-17 discloses that the image planes are re-steered to relocate the target in the center of the intersection of the planes, meaning the second plane is also adjusted so that the intersection passes through the position of the transducer (target)).
Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider in view of Radulescu as applied to claim 1 above, and further in view of De Wijs et al. (WO2017102338A1, hereinafter De Wijs).
Regarding claim 4, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the ultrasound transducer is an ultrasound sensor (pg. 16, line 12, “locating transducer” and Nelson claim 1 discloses “an ultrasonic transducer associated therewith for reception of ultrasonic wave energy from said imaging transducer”, because the transducer receives the wave energy it is considered a sensor),
the ultrasound signals are transmitted by the beamforming ultrasound imaging probe (pg. 6, lines 9-20 discloses the received signals are used to generate an image meaning the signals transmitted are ultrasound imaging signals) and received by the ultrasound sensor (Nelson claim 1 discloses “an ultrasonic transducer associated therewith for reception of ultrasonic wave energy from said imaging transducer”), and the at least one processor is further configured to:
receive electrical signals generated by the ultrasound sensor in response to the ultrasound signals transmitted by the beamforming ultrasound imaging probe (pg. 16, lines 11-20 discloses that a signal is produced by the locating transducer to determine the location of the catheter tip and that the motion of the catheter is tracked thereafter, meaning the motion detector 30 receives the signal from the transducer indicating where the catheter tip is located).
Schneider in view of Radulescu does not specifically teach receive synchronization signals from the beamforming ultrasound imaging probe, the synchronization signals corresponding to a time of emission of the transmitted ultrasound signals; and to identify the maximum signal ultrasound beam based on the received electrical signals and the received synchronization signals.
However,
De Wijs in a similar field of endeavor teaches receive synchronization signals from the beamforming ultrasound imaging probe, the synchronization signals corresponding to a time of emission of the transmitted ultrasound signals (pg. 12, lines 29-32 discloses “time emission of each beam of the plurality of beams is matched with the time of detection of the maximum signal”, meaning the synchronization signals that correspond to a time of emission of the transmitted ultrasound signals are received); and to
identify the maximum signal ultrasound beam based on the received electrical signals and the received synchronization signals (pg. 12, lines 29-32 discloses identifying the beam associated with the maximum signal using the emission time and the detected maximum signal).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, the non-transitory computer readable medium and the method disclosed by Schneider in view of Radulescu to have received synchronization signals from the beamforming ultrasound imaging probe, the synchronization signals corresponding to a time of emission of the transmitted ultrasound signals, and to have identified the maximum signal ultrasound beam based on the received electrical signals and the received synchronization signals, in order to determine which location is associated with the maximum signal, as recognized by De Wijs (pg. 12, lines 24-26).
Regarding claim 7, Schneider in view of Radulescu and De Wijs teaches the system of claim 4, as set forth above. Schneider further teaches that, in causing the beamforming ultrasound imaging probe to adjust the orientation of the first image plane and the second image plane, the at least one processor is configured to:
re-orient the first image plane and the second image plane simultaneously such that an electrical signal associated with the first image plane is maximized (pg. 9, lines 17-19, “steer the multiple planes in the multi-plane system such that the target 54 is continuously visualized in all planes”; in order to continuously visualize the target in all planes, the multiple planes must be adjusted simultaneously, and by maintaining the first image plane on the target (the transducer on the catheter tip), the electrical signal is maximized); and
re-orient the second image plane independently of the first image plane such that an electrical signal associated with the second image plane is maximized (pg. 2, lines 9-13 discloses that once the target is viewed in the reference plane (first imaging plane), the second plane is adjusted to also capture the target; by adjusting the second plane to view the target (the transducer on the catheter tip), the electrical signal is maximized).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Schneider in view of Radulescu as applied to claim 1 above, and further in view of Sato et al. (US 20100056918, hereinafter Sato).
Regarding claim 11, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the at least one processor is configured to:
reconstruct ultrasound images based on ultrasound image data generated by the beamforming ultrasound imaging probe for each of the plurality of intersecting image planes (pg. 6, lines 11-12 discloses that the subsystem 10B processes the echo signals in order to generate the images; pg. 16, lines 18-20 discloses that each image plane generates a separate image, 450B and 420B), and
cause the beamforming imaging probe to rotate at least one of the first image plane or the second image plane about the intersection (pg. 13, lines 10-13 discloses both planes are configured to rotate. Pg. 8, lines 2-7 discloses the motion detector causes the imaging planes to be steered).
Schneider in view of Radulescu does not specifically teach reconstructing a three-dimensional ultrasound image based on ultrasound image data corresponding to at least one of the first image plane or the second image plane during a rotation of the imaging planes.
However,
Sato in a similar field of endeavor teaches reconstructing a three-dimensional ultrasound image based on ultrasound image data corresponding to at least one of the plurality of intersecting image planes during a rotation of the imaging planes ([0013] discloses that an ultrasonic scanning plane is rotated around an axis in order to reconstruct a three-dimensional image; [0016]-[0017] further disclose that the probe is a multi-plane probe and the scanning planes are rotated in order to perform three-dimensional scanning).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Schneider in view of Radulescu to have reconstructed a three-dimensional ultrasound image based on ultrasound image data corresponding to at least one of the plurality of intersecting image planes during a rotation of the imaging planes in order to generate a more accurate three-dimensional image of the target, as recognized by Sato ([0013]).
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider in view of Radulescu as applied to claim 1 above, and further in view of Schneider et al. (US20160249885, hereinafter Schneider ‘885).
Regarding claim 12, Schneider in view of Radulescu teaches the system of claim 1, as set forth above. Schneider further teaches the at least one processor is configured to:
reconstruct ultrasound images based on ultrasound image data generated by the beamforming ultrasound imaging probe for each of the plurality of intersecting image planes (pg. 6, lines 11-12 discloses that the subsystem 10B processes the echo signals in order to generate the images; pg. 16, lines 18-20 discloses that each image plane generates a separate image, 450B and 420B);
and cause the beamforming ultrasound imaging probe to adjust the orientation of at least one of the first image plane or the second image plane based on a desired view (pg. 13, lines 10-13 discloses that both planes are rotated and tilted to follow a target and keep it in the planes; the desired view is considered the view with the target located in it).
Schneider in view of Radulescu does not specifically teach the at least one processor is configured to generate an overlay image in which the reconstructed ultrasound images are registered to an anatomical model; and cause the beamforming ultrasound imaging probe to adjust at least one of the first image plane or the second image plane based on a desired view defined in the anatomical model.
However,
Schneider ‘885 in a similar field of endeavor teaches a processor configured to generate an overlay image in which the reconstructed ultrasound images are registered to an anatomical model (claim 1 discloses a segmentation and tracking processor configured to generate view planes from heart image data that is registered to a heart model; the generated view planes are considered the overlay image and the heart model is considered the anatomical model); and
cause the beamforming ultrasound imaging probe to adjust at least one of the first image plane or the second image plane based on a desired view defined in the anatomical model ([0024]-[0025] disclose that the desired plane locations of the heart model are used to extract the plane images (tri-planes), that “after the heart model has been fitted to the heart anatomy in the volume image, only the tri-planes can be scanned for the next update”, and that the system can “steer the direction of the plane scanning of the next tri-plane acquisition by control of the beamform controller”; by scanning only the tri-planes, the system causes the probe to adjust at least one of the image planes based on the desired view defined in the anatomical model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Schneider in view of Radulescu to have the at least one processor be configured to generate an overlay image in which the reconstructed ultrasound images are registered to an anatomical model, and to cause the beamforming ultrasound imaging probe to adjust at least one of the first image plane or the second image plane based on a desired view defined in the anatomical model, in order to improve the quality of the displayed image, as recognized by Schneider ‘885 ([0025]).
Regarding claim 13, Schneider in view of Radulescu and Schneider ‘885 teaches the system of claim 12, as set forth above. Schneider further discloses the desired view comprises a visualization plane (pg. 13, lines 10-13 disclose adjusting both planes to follow a target; the plane view with the target located in it is considered the visualization plane); and
the at least one processor is configured to cause the beamforming ultrasound imaging probe to provide the desired view by rotating at least one of the first image plane or the second image plane about the intersection of the first image plane and the second image plane such that at least one of the first image plane or the second image plane is parallel to the visualization plane (pg. 13, lines 10-13 disclose, “both planes may be rotated and tilted from these initial orientations to follow a target and keep it in the planes”, and pg. 8, lines 2-7 discloses that the motion detector 30 tracks the motion of the target and controls the beamformer controller to continually steer the imaging planes of the probe; by moving the planes to provide the desired view, at least one of the planes is parallel to the visualization plane).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW BEGEMAN whose telephone number is (571)272-4744. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at (571)270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW W BEGEMAN/Examiner, Art Unit 3798