DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/07/2025 has been entered.
Response to Amendment
This Office action is in response to the remarks filed on 11/07/2025.
The amendment filed 11/07/2025 has been entered. Claims 1, 3-4, 6-11, 13-19, and 21-22 remain pending in the application; claims 2, 5, 12, and 20 have been cancelled.
The claim objections have been withdrawn in light of claim amendments.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4, 10-11, 14-15, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Heron et al. (US 20090149749 A1, of record).
Regarding claim 1, Heron teaches a method of echocardiography comprising:
accessing a cine-loop comprising a sequence of image frames representing at least one cardiac cycle, wherein each of the image frames in the sequence of image frames is based on ultrasound data ([0008] discloses acquiring sequential ultrasound images over a time interval capturing a cardiac cycle);
identifying a target sequence of image frames including a cardiac event within the cardiac cycle (the first interval can be an interval of interest and the second interval can be a reference interval corresponding to the interval of interest [0078]; The two different cineloops can be, for example, separate segments of a single cineloop [0046]), wherein the target sequence of image frames is a subset of the sequence of image frames and a portion of the sequence of image frames that is part of the cardiac cycle and not part of the target sequence comprises a non-target sequence of image frames (The synchronized playback function enables a user to review a first interval and at least one corresponding second interval selected from different cineloops in a synchronized fashion … The two different cineloops can be, for example, separate segments of a single cineloop [0046]; [0048] further discloses that multiple cardiac cycles can be captured within one cineloop); and
displaying the cine-loop with multiple playback speeds ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that each interval is adjustable, including the display speed/playback speed of each interval), where the non-target sequence of image frames is displayed at a first playback speed, and where the target sequence of image frames is displayed at a second playback speed that is slower than the first playback speed ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted; [0010] discloses that consecutive cardiac cycles can be captured, and [0076] discloses that for two intervals, one can be the interval of interest, i.e. target sequence, and the other is not the interval of interest, i.e. not part of the target sequence; the selection of different speeds/intervals for different phases of the cardiac cycles allows for displaying different image frames at different playback speeds).
Regarding claim 4, Heron teaches the method of claim 1, as discussed above. Heron further teaches wherein the second playback speed is 50% or less of the first playback speed (the user can select to playback at a percentage of the real-time or acquisition speed, e.g., 25%, 50%, 75%, 100%, 150% or 200%. 75% means a playback rate at 75% of the original, acquisition frame rate and 200% is twice the original frame rate [0048]; [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted).
Regarding claim 10, Heron teaches the method of claim 1, as discussed above. Heron further teaches wherein the cardiac event is selected from the following list: an opening of a valve, a closing of a valve, an initial ejection phase, a peak ejection phase, an early filling phase, an atrial contraction phase ([0042] discloses that the intervals can be defined using ECG readings, and uses R-wave detection as an example; ECG captures cardiac cycle information, which includes atrial systole, ventricular systole, and complete cardiac diastole, as disclosed in [0004]) and a septal flash.
Regarding claim 11, Heron teaches a system for echocardiography comprising:
an ultrasound probe (an echocardiography probe for acquisition of ultrasound data [0019]);
a display device (a display system [0017]); and
at least one processor in electronic communication with the ultrasound probe and the display device, wherein the at least one processor is configured to (processor [0035]):
access a cine-loop comprising a sequence of image frames representing a cardiac cycle (the first interval can be an interval of interest and the second interval can be a reference interval corresponding to the interval of interest [0078]; The two different cineloops can be, for example, separate segments of a single cineloop [0046]), wherein each of the image frames in the sequence of image frames is based on ultrasound data ([0008] discloses acquiring sequential ultrasound images over a time interval capturing a cardiac cycle);
identify a target sequence of image frames including a cardiac event within the cardiac cycle (the first interval can be an interval of interest and the second interval can be a reference interval corresponding to the interval of interest [0078]; The two different cineloops can be, for example, separate segments of a single cineloop [0046]), wherein the target sequence of image frames is a subset of the sequence of image frames and a portion of the sequence of image frames that is part of the cardiac cycle and not part of the target sequence comprises a non-target sequence of image frames (The synchronized playback function enables a user to review a first interval and at least one corresponding second interval selected from different cineloops in a synchronized fashion … The two different cineloops can be, for example, separate segments of a single cineloop [0046]; [0048] further discloses that multiple cardiac cycles can be captured within one cineloop); and
display the cine-loop with multiple playback speeds ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that each interval is adjustable, including the display speed/playback speed of each interval), where the non-target sequence of image frames is displayed at a first playback speed, and where the target sequence of image frames is displayed at a second playback speed that is slower than the first playback speed ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted; [0010] discloses that consecutive cardiac cycles can be captured, and [0076] discloses that for two intervals, one can be the interval of interest, i.e. target sequence, and the other is not the interval of interest, i.e. not part of the target sequence; the selection of different speeds/intervals for different phases of the cardiac cycles allows for displaying different image frames at different playback speeds).
Regarding claim 14, Heron teaches the system of claim 11, as discussed above. Heron further teaches wherein the second playback speed is 50% or less of the first playback speed (the user can select to playback at a percentage of the real-time or acquisition speed, e.g., 25%, 50%, 75%, 100%, 150% or 200%. 75% means a playback rate at 75% of the original, acquisition frame rate and 200% is twice the original frame rate [0048]; [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted).
Regarding claim 15, Heron teaches the system of claim 11, as discussed above. Heron further teaches wherein the second playback speed is 25% or less of the first playback speed (the user can select to playback at a percentage of the real-time or acquisition speed, e.g., 25%, 50%, 75%, 100%, 150% or 200%. 75% means a playback rate at 75% of the original, acquisition frame rate and 200% is twice the original frame rate [0048]; and [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted).
Regarding claim 21, Heron teaches a non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to:
access a cine-loop comprising a sequence of image frames representing at least one cardiac cycle, wherein each of the image frames in the sequence of image frames is based on ultrasound data ([0008] discloses acquiring sequential ultrasound images over a time interval capturing a cardiac cycle);
identify a target sequence of image frames including a cardiac event within the cardiac cycle (the first interval can be an interval of interest and the second interval can be a reference interval corresponding to the interval of interest [0078]; The two different cineloops can be, for example, separate segments of a single cineloop [0046]), wherein the target sequence of image frames is a subset of the sequence of image frames and a portion of the sequence of image frames that is part of the cardiac cycle and not part of the target sequence comprises a non-target sequence of image frames (The synchronized playback function enables a user to review a first interval and at least one corresponding second interval selected from different cineloops in a synchronized fashion … The two different cineloops can be, for example, separate segments of a single cineloop [0046]; [0048] further discloses that multiple cardiac cycles can be captured within one cineloop); and
display the cine-loop with multiple playback speeds ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that each interval is adjustable, including the display speed/playback speed of each interval), where the non-target sequence of image frames is displayed at a first playback speed, and where the target sequence of image frames is displayed at a second playback speed that is slower than the first playback speed ([0048] discloses that the user can select the playback speed of the cine-loop, and [0016] discloses that the display speed/playback speed of each interval, i.e. image frame, can be adjusted; [0010] discloses that consecutive cardiac cycles can be captured, and [0076] discloses that for two intervals, one can be the interval of interest, i.e. target sequence, and the other is not the interval of interest, i.e. not part of the target sequence; the selection of different speeds/intervals for different phases of the cardiac cycles allows for displaying different image frames at different playback speeds).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3, 6-7, 13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Heron (US 20090149749 A1, of record) in view of Annangi (US 20210174496 A1, of record).
Regarding claim 3, Heron teaches the method of claim 1, as discussed above. Heron, however, is silent regarding wherein the portion of the sequence of image frames that is not part of the target sequence comprises a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames.
Annangi is considered analogous art to the instant application, as "System and methods for sequential scan parameter selection is disclosed".
Annangi teaches a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames ([0049]-[0056] disclose a method of acquiring a plurality of images, wherein during the image acquisition, the target images are acquired, and scanning still takes place afterwards, i.e. a target sequence in between two non-target sequences).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Regarding claim 6, Heron teaches the method of claim 1, as discussed above. Heron, however, is silent regarding said identifying the target sequence of image frames being performed automatically by at least one processor.
Annangi is considered analogous art to the instant application, as "System and methods for sequential scan parameter selection is disclosed".
Annangi teaches said identifying the target sequence of image frames is performed automatically by at least one processor ([0025] discloses using a machine learning model to classify images, [0045] discloses classifying images within cine loops, and [0049] discloses capturing target images; processor [0020]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include said identifying the target sequence of image frames being performed automatically by at least one processor, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Regarding claim 7, modified Heron teaches the method of claim 6, as discussed above. Heron, however, does not teach wherein the at least one processor implements an artificial intelligence technique in order to identify the target sequence of image frames.
Annangi teaches wherein the at least one processor implements an artificial intelligence technique in order to identify the target sequence of image frames ([0025] discloses using a machine learning model to classify images, [0045] discloses classifying images within cine loops, and [0049] discloses capturing target images; processor [0020]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include wherein the at least one processor implements an artificial intelligence technique in order to identify the target sequence of image frames, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Regarding claim 13, Heron teaches the system of claim 11, as discussed above. Heron, however, does not teach wherein the portion of the sequence of image frames that is not part of the target sequence comprises a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames.
Annangi is considered analogous art to the instant application, as "System and methods for sequential scan parameter selection is disclosed".
Annangi teaches a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames ([0049]-[0056] disclose a method of acquiring a plurality of images, wherein during the image acquisition, the target images are acquired, and scanning still takes place afterwards, i.e. a target sequence in between two non-target sequences).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include a first non-target sequence and a second non-target sequence, wherein the first non-target sequence is before the target sequence in the sequence of image frames and the second non-target sequence is after the target sequence in the sequence of image frames, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Regarding claim 16, Heron teaches the system of claim 11, as discussed above. Heron, however, does not teach wherein the at least one processor is configured to automatically identify the target sequence of image frames.
Annangi is considered analogous art to the instant application, as "System and methods for sequential scan parameter selection is disclosed".
Annangi teaches wherein the at least one processor is configured to automatically identify the target sequence of image frames ([0025] discloses using a machine learning model to classify images, [0045] discloses classifying images within cine loops, and [0049] discloses capturing target images; processor [0020]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include wherein the at least one processor is configured to automatically identify the target sequence of image frames, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Regarding claim 17, modified Heron teaches the system of claim 11, as discussed above. Heron, however, does not teach wherein the at least one processor is configured to implement an artificial intelligence technique in order to identify the target sequence of image frames.
Annangi teaches wherein the at least one processor is configured to implement an artificial intelligence technique in order to identify the target sequence of image frames ([0025] discloses using a machine learning model to classify images, [0045] discloses classifying images within cine loops, and [0049] discloses capturing target images; processor [0020]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include wherein the at least one processor is configured to implement an artificial intelligence technique in order to identify the target sequence of image frames, as taught by Annangi. Doing so would aid in diagnosis and/or display on a display device in real time or near real time, as suggested by Annangi ([0002]).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Heron (US 20090149749 A1, of record) in view of Annangi (US 20210174496 A1, of record), Li (US 20240173007 A1, of record), and Jackson et al. (US 20070055158 A1, hereinafter "Jackson", of record).
Regarding claim 8, modified Heron teaches the method of claim 7, as discussed above. Heron, however, does not teach wherein the at least one processor implements the artificial intelligence technique in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event, and wherein the at least one processor identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event.
Li is considered analogous to the instant application as an ultrasound imaging system and method is disclosed (abstract).
Li teaches the at least one processor implements the artificial intelligence technique ([0045] discloses that machine learning models are used to determine whether the target view is in the image) in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event ([0009] and [0060] disclose that the processor is configured to select one or more image frames from a cineloop that captures/identifies a cardiac event).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include wherein the at least one processor implements the artificial intelligence technique in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event, as taught by Li. Doing so can assist operators in obtaining accurate measurements with repeatability, which may be desirable, as suggested by Li ([0004]).
The combined invention, however, is still silent regarding wherein the at least one processor identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event.
Jackson is considered analogous to the instant application as “Automated identification of cardiac events with medical ultrasound” is disclosed (title).
Jackson teaches wherein the at least one processor (processor [0017]) identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event ([0016] and [0018] disclose that different frames of data are associated with different relative times with respect to a heart cycle, and a table of cardiac event times is generated; [0022]-[0023] disclose that during data analysis, a subset of images is used to limit a search for a cardiac event to approximate time intervals, and that heart valve motion such as opening/closing events can be identified; [0016], [0028], and [0039] disclose that timing information is obtained which can then be used for expected event times; there are multiple (at least 3) time intervals captured, one of which is identified with a cardiac event of interest, with time stamps/intervals recorded before, after, and during the cardiac event based off a generated table of cardiac event times).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event, as taught by Jackson. Doing so would allow for a more efficient examination or diagnosis, as suggested by Jackson ([0015]).
Regarding claim 18, modified Heron teaches the system of claim 17, as discussed above. Heron, however, does not teach wherein the at least one processor implements the artificial intelligence technique in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event, and wherein the at least one processor identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event.
Li is considered analogous to the instant application as an ultrasound imaging system and method is disclosed (abstract).
Li teaches the at least one processor implements the artificial intelligence technique ([0045] discloses that machine learning models are used to determine whether the target view is in the image) in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event ([0009] and [0060] disclose that the processor is configured to select one or more image frames from a cineloop that captures/identifies a cardiac event).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include wherein the at least one processor implements the artificial intelligence technique in order to identify a cardiac phase associated with each image frame in the sequence of image frames in order to identify the cardiac event, as taught by Li. Doing so can assist operators in obtaining accurate measurements with repeatability, which may be desirable, as suggested by Li ([0004]).
The combined invention, however, is still silent regarding wherein the at least one processor identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event.
Jackson is considered analogous to the instant application as “Automated identification of cardiac events with medical ultrasound” is disclosed (title).
Jackson teaches wherein the at least one processor (processor [0017]) identifies the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event ([0016] and [0018] disclose that different frames of data are associated with different relative times with respect to a heart cycle, and a table of cardiac event times is generated; [0022]-[0023] disclose that during data analysis, a subset of images is used to limit a search for a cardiac event to approximate time intervals, and that heart valve motion such as opening/closing events can be identified; [0016], [0028], and [0039] disclose that timing information is obtained which can then be used for expected event times; there are multiple (at least 3) time intervals captured, one of which is identified with a cardiac event of interest, with time stamps/intervals recorded before, after, and during the cardiac event based off a generated table of cardiac event times).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include the target sequence as including a first plurality of image frames acquired within a first predetermined amount of time before the cardiac event, a second plurality of image frames acquired within a second predetermined amount of time after the cardiac event, and a target image frame representing the cardiac event, as taught by Jackson. Doing so would allow for a more efficient examination or diagnosis, as suggested by Jackson ([0015]).
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Heron (US 20090149749 A1, of record) in view of Dormer (US 20210391040 A1, of record).
Regarding claim 9, Heron teaches the method of claim 1, as discussed above. Heron, however, does not teach automatically applying a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine-loop.
Dormer is considered analogous to the instant application as medical imaging data is disclosed ([0416]).
Dormer teaches automatically applying a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine-loop ([0205] discloses that an orientation and zoom level for each view can be calculated from the positions of the landmarks within an image; if the landmark's position changes in time, the view will change in time accordingly; [0215] discloses that the landmarks and views are generated automatically, and can be displayed in a cine movie).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the invention of Heron to include applying a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine-loop, as taught by Dormer. Doing so would allow a user to quickly identify landmarks, as suggested by Dormer ([0195]).
Regarding claim 19, Heron teaches the method of claim 11, as discussed above. Heron, however, does not teach automatically applying a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine-loop.
Dormer is considered analogous to the instant application as medical imaging data is disclosed ([0416]).
Dormer teaches wherein the processor is further configured to apply a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine loop ([0205] discloses that an orientation and zoom level for each view can be calculated from the positions of the landmarks within an image; if the landmark's position changes in time, the view will change in time accordingly; [0215] discloses that the landmarks and views are generated automatically, and can be displayed in a cine movie).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include wherein the processor is further configured to apply a spatial zoom to the sequence of image frames in order to zoom-in on an anatomical structure while displaying the sequence of image frames on the display device as the cine loop, as taught by Dormer. Doing so would allow landmarks to be quickly identified, as suggested by Dormer ([0195]).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Heron (US 20090149749 A1) in view of Halmann (US 20210350539 A1).
Regarding claim 21, Heron teaches the method according to claim 1, as discussed above. Heron, however, does not teach wherein the first playback speed transitions to the second playback speed at a constant rate over a period of time.
Halmann is considered analogous to the instant application as “Ultrasound imaging system and method” is disclosed (title).
Halmann teaches wherein the first playback speed transitions to the second playback speed at a constant rate over a period of time ("the processor 116 may play back each of the videos at a speed other than that at which the ultrasound image data was acquired. The processor 116 may therefore provide a relative temporal scaling between the videos based on how much the playback speed for each of the videos is adjusted in relation to the other videos in the panoramic view. In other words, the temporal scaling is performed by adjusting the relative playback speeds of each of the videos in the panoramic view" [0046]; "A segment of interest may be temporally scaled…. The target video duration may be based on the length of one or more of the segments of interest" [0048]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combined invention of Heron to include wherein the first playback speed transitions to the second playback speed at a constant rate over a period of time, as taught by Halmann. Doing so would achieve a target video duration, as suggested by Halmann ([0048]).
Response to Arguments
Applicant's arguments filed 11/07/2025 have been fully considered but they are not persuasive.
Regarding the 35 USC § 102 rejection of claims 1, 11, and 17, the applicant argues on pages 7-10 that Heron does not disclose all the claim elements of claim 1, and states that "Paragraph [0011] fails to disclose selecting two intervals on the same cine loop, let alone from the same cardiac cycle". The applicant further cites paragraph [0008] for context for claim 11. In response, the examiner respectfully disagrees. Heron states that "The two different cineloop can be, for example, separate segments of a single cineloop" ([0048]); further, paragraph [0010] of Heron discloses selection of a segment of a cardiac cycle when multiple cardiac cycles are captured. Accordingly, this argument is not persuasive and this rejection is maintained.
Applicant's arguments on pages 9-10 regarding the remaining dependent claims are premised upon the assertion that the claims are allowable for the same reasons as claims 1, 11, and 21, due to dependency on an allowable claim. The examiner respectfully disagrees for the reasons discussed above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Szucs (US 20080249402 A1)
[0005] discloses that the speed of the clip is dependent on the specific process of the anatomical region of interest, like systole
[0015] display of the first and second clips is arranged to start and end simultaneously
[0037] discloses a set of images captured during three cycles, i.e., multiple image frames
[0040] discloses that each “frame” is set at different speeds, with one cycle being half the speed of the other (play back at half the speed for the third cycle)
[0035] discloses one clip that is divided into three parts (subsets); a cine-loop is formed and playback continues until the user stops
[0041] each portion corresponds to a predetermined cycle of the anatomical region of interest
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NESHAT BASET whose telephone number is (571)272-5478. The examiner can normally be reached M-F 8:30-17:30 CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PASCAL M. BUI-PHO can be reached on (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.B./Examiner, Art Unit 3798
/PASCAL M BUI PHO/Supervisory Patent Examiner, Art Unit 3798