DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s arguments and formality amendments filed on 12/8/2025 have been entered. Claims 1-20 are pending. Previous claim objections are withdrawn in light of applicant’s remarks.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Independent claim 1 recites, for example, the following limitations: “…recurrently generating a live similarity measure indicative of a similarity between the view represented in the reference image data and a view represented in the live ultrasound image data…” and “…generating a user output alert…”, which fall within the mental processes grouping of abstract ideas. No specific machine or device that is not known or generic is recited in the claim limitations; see MPEP § 2106.05(b). The limitations recite no specifics as to the algorithmic foundation or the dimensionality associated with the image data, and as such can be considered computations that can be performed in the mind using visual inspection or, at most, simple pen and paper. Further, the use of image segmentation or of a binary presence of anatomical structures is considered a judgment or evaluation, which is grouped as a mental process under the 2019 PEG: segmenting is a partition/identification of an image into discrete groups, and an individual can mentally identify the different structures (groups) presented in an image, or at most do so with pen and paper, so this limitation is considered a mental process. The generating of a user output alert is considered a judgment or evaluation, which is grouped as a mental process under the 2019 PEG, and/or managing interactions between people (namely, humans following rules), which is grouped as a method of organizing human activity under the 2019 PEG; once the similarity measure is evaluated in the mind, an individual can internally verbalize an alert, so this limitation is likewise considered a mental process.
Lastly, the limitations “…acquiring at a first ultrasound imaging device…”, “…transmitting…”, “…retrieving from the datastore…”, “…acquiring live ultrasound image data…” and “…capturing and storing one or more frames…” are considered extra-solution activities recited at a high level of generality, with no specific machine or device disclosed that is not generic or known to perform the limitations. Further, analogous limitations are found in claims 12-15.
The judicial exceptions are not integrated into a “practical application” as defined by the Subject Matter Eligibility Analysis documented in Federal Register 84(4), issued on 07 January 2019, and MPEP § 2106. The limitation of “…one or more processors…” in claims 12-15 simply represents implementing the abstract ideas with a computer. MPEP § 2106.05(f) notes that “using a computer as a tool to perform the abstract idea” is not sufficient to integrate a judicial exception into a practical application as interpreted by the court(s). Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972), held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle, and Intellectual Ventures LLC v. Symantec Corp., 838 F.3d 1307, 1318 (Fed. Cir. 2016), established that mental processes encompass acts which, absent anything beyond generic computer components, may be “performed by a human, mentally or with pen and paper.” Intellectual Ventures additionally established that if a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it remains in the mental processes category of abstract ideas unless the step(s) cannot be practically performed in the mind. Therefore, a positive recitation of the associated computer would not necessarily result in patent-eligible subject matter.
The dependent claims 2-11 and 16-20 do not sufficiently link the subject matter to a practical application or recite element(s) which constitute significantly more than the abstract ideas identified. The dependent claims are directed to additional limitations which encompass abstract ideas consistent with those identified above and which are well-understood, routine and/or conventional activity. Further, dependent claims 2-11 and 16-20 merely include limitations that either further define the abstract idea (and thus do not make the abstract idea any less abstract) or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use, because they are merely incidental or token additions to the claims that do not alter or affect how the process steps are performed.
Regarding Claim 15, the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is considered a computer program per se.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, with respect to the limitation “…obtaining from the datastore, at a second ultrasound imaging device, the reference image data based on querying the datastore with patient identifier information…”, it is unclear how the second ultrasound imaging device is connected with obtaining the reference image data, which is acquired in a subsequent acquiring step by a first ultrasound imaging device. The connection of the first and second ultrasound imaging devices with respect to the obtaining step is unclear.
Claim 9 recites the limitation "receiving annotation data for storage" in line 2. There is insufficient antecedent basis for this limitation in the claim. Further, the connection of the reference image data and the annotation data with the first or second ultrasound imaging devices of claim 1 is unclear with respect to the receiving of annotation data.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Emery (U.S. 6,402,693, June 11, 2002) (hereinafter “Emery”) in view of Harks et al. (U.S. 2019/0357987, November 28, 2019) (hereinafter “Harks”).
Regarding Claim 1, Emery teaches: A computer implemented method, for performance with a distributed computing system (Fig. 2), the method comprising:
acquiring at a first ultrasound imaging device first ultrasound imaging data of a subject, including at least one image depicting at least one view of an anatomy of the subject (“FIG. 1 illustrates a pair of sequentially obtained ultrasound images. A reference image 10 contains a tissue sample 12 that is of interest to a physician or sonographer. For example, the tissue sample 12 may be a heart muscle in a non-stressed condition or may be an image of a tumor located in the patient's body.”, column 2);
transmitting the first ultrasound imaging data as reference ultrasound image data into a datastore, wherein the first ultrasound imaging data is tagged with subject identifier information (“Beginning with a step 30, a user of the ultrasound system selects a reference image against which future images will be compared. The reference image is stored on a recordable media of the ultrasound system.”, column 2);
obtaining from the datastore, by a second ultrasound imaging device, the reference ultrasound image data based on querying the datastore with patient identifier information, wherein the second imaging device is the same as or different to the first imaging device (“For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. At a step 34, the user is provided with feedback indicating the results of the comparison for each image.”, column 2);
acquiring live ultrasound image data at the second ultrasound imaging device (“In order to analyze the tissue sample under different conditions, such as stress, or at a different time, a subsequent ultrasound image 14 is obtained. In the subsequent image 14, the tissue sample 12' is seen under the different conditions or at a different time.”, column 2; “When the user wishes to obtain another image of the tissue sample in the reference image, the user positions the probe at approximately the same location used to obtain the reference image and begins acquiring sequential ultrasound images.”, column 2);
recurrently generating a live similarity measure indicative of a similarity between the view represented in the reference image data and a view represented in the live ultrasound image data, (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images.”);
generating a user output alert when the similarity measure matches at least one pre- defined criterion (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2); and
capturing and storing one or more frames of the live ultrasound image data, tagged with the patient identifier information (“At a step 36, it is determined whether the user has selected a subsequent image for comparison against the reference image. If not, the processing returns to step 32 and additional images are obtained and compared against the reference image. If the answer to step 36 is yes, then the image selected is stored on the recordable media at a step 38. At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Emery does not teach: wherein the similarity measure uses at least one of: image segmentation and binary presence of anatomical structures.
Harks in the field of ultrasound imaging system teaches: “The similarities between an US image and the model may be determined on the basis of a segmented version of the US image, which may be computed using a suitable procedure known the person skilled in the art. The similarity measure may be computed on the basis of the number of overlapping points between the segmented US image and the model for the best overlap between the segmented US image and the model.” [0049].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the similarity measure in Emery to use image segmentation as taught in Harks, in order to determine the transformation that would correlate and register the images such that the live ultrasound image has the largest similarity to the reference (Harks, [0064]).
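For illustration only, the following sketch shows one way an overlap-count similarity between a segmented live image and a segmented reference, of the general kind described in Harks [0049], might be computed and compared against a pre-defined threshold. It is a hypothetical example, not the claimed method or either reference's implementation; the function names and the 0.9 threshold are assumptions.

import numpy as np

def overlap_similarity(ref_mask, live_mask):
    # Fraction of reference-mask pixels that are also present in the live mask.
    ref = ref_mask.astype(bool)
    live = live_mask.astype(bool)
    if ref.sum() == 0:
        return 0.0
    return float(np.logical_and(ref, live).sum()) / float(ref.sum())

def meets_criterion(similarity, threshold=0.9):
    # Hypothetical pre-defined criterion: alert when similarity reaches the threshold.
    return similarity >= threshold

# Toy 4x4 binary masks: 3 of the 4 reference pixels overlap, so similarity = 0.75.
ref = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
live = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(overlap_similarity(ref, live), meets_criterion(overlap_similarity(ref, live)))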
Regarding Claim 2, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the pre-defined criterion includes a pre-defined threshold for the similarity measure (“At a step 34, the user is provided with feedback indicating the results of the comparison for each image. The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2; “…the comparison performed comprises a sum of absolute differences (SAD) calculation performed on each pixel of the reference image with each pixel of the subsequently obtained images. The magnitude of each pixel of the reference image and a corresponding pixel in a subsequent image are subtracted and summed over the entire image. Those images having a higher degree of similarity will have a lower sum. Therefore, the ultrasound system provides feedback to the user indicating the results of the SAD calculation.”, column 3).
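For illustration only, a sum-of-absolute-differences (SAD) comparison of the general kind Emery describes, where a lower sum indicates a higher degree of similarity and a threshold drives the user feedback, might be sketched as follows. This is a hypothetical example, not either reference's implementation; the per-pixel threshold value is an assumption.

import numpy as np

def sad(reference, candidate):
    # Pixel-wise sum of absolute differences; lower values mean more similar images.
    return float(np.abs(reference.astype(float) - candidate.astype(float)).sum())

def similar_enough(reference, candidate, max_mean_abs_diff=5.0):
    # Hypothetical threshold: mean absolute per-pixel difference below a chosen limit.
    return sad(reference, candidate) / reference.size <= max_mean_abs_diff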
Regarding Claim 3, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore is within a patient monitoring subsystem (See Figs. 2-3).
Regarding Claim 4, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: further comprising capturing and storing at least one frame of the live image data only when the similarity measure meets the pre-defined criterion (“ Once a reference frame has been stored in a memory 64, subsequent frames are compared with the reference frame by the processor 62. A signal which is proportional to the comparison is supplied to a transducer 68, which provides a visual or audible indication of the degree of similarity to the user. Once the degree of similarity has reached an acceptable level, the user selects the capture switch 66 or speaks, a command which is interpreted and causes the processor 62 to store the latest image frame in the memory 64. Alternatively, the ultrasound system may always store the subsequent image having the greatest degree of similarity without user input.”, column 3).
Regarding Claim 5, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein capturing and storing the at least one frame of the live image data occurs: only when the similarity measure meets the pre-defined criterion; and responsive to a capture command from a user input device (“The processor 62 is interfaced with a capture switch 66 or other input device which allows a user to indicate when an image frame is to be saved. The physician or sonographer selects an image frame for use as a reference frame against which future ultrasound images will be compared. Once a reference frame has been stored in a memory 64, subsequent frames are compared with the reference frame by the processor 62. A signal which is proportional to the comparison is supplied to a transducer 68, which provides a visual or audible indication of the degree of similarity to the user. Once the degree of similarity has reached an acceptable level, the user selects the capture switch 66 or speaks, a command which is interpreted and causes the processor 62 to store the latest image frame in the memory 64. Alternatively, the ultrasound system may always store the subsequent image having the greatest degree of similarity without user input.”, column 3).
Regarding Claim 6, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore stores a set of reference images depicting different respective views of the anatomy, each tagged in accordance with the view depicted (“At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Regarding Claim 7, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore further stores a view recommendation indicative of a recommended next view of the anatomy to capture, and wherein the obtained reference image is one of the set of images depicting the recommended next view (“At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2;“ A 2-D cross-correlation technique provides not only a measure of the similarity between the two images but also provides data indicating which way the subsequent image (and hence which way the transducer should be moved) to increase the similarity of the images. Furthermore, it is not required that the comparison be based on the whole image. Instead, a user may select a portion of the image that is compared against additional images.”, column 2).
Regarding Claim 8, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: further comprising generating probe positioning guidance based on the similarity metric, the guidance for guiding movement of the probe by a user for improving the similarity metric, and outputting the guidance information to a user interface device (“At a step 34, the user is provided with feedback indicating the results of the comparison for each image. The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2; “For example, the two images may be compared using correlation, mean brightness, 2-D Fourier transform, least mean squares, median or mode techniques or other mathematical techniques that provide an indication of the similarity of the two images. A 2-D cross-correlation technique provides not only a measure of the similarity between the two images but also provides data indicating which way the subsequent image (and hence which way the transducer should be moved) to increase the similarity of the images.”, column 2).
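For illustration only, an FFT-based 2-D cross-correlation that estimates the offset between the live frame and the reference, and maps that offset to a directional hint of the general kind Emery mentions, could be sketched as follows. This is a hypothetical example, not Emery's implementation; the sign conventions and the wording of the hint are assumptions.

import numpy as np

def estimate_shift(reference, live):
    # Peak of the circular cross-correlation gives the (row, col) offset estimate.
    cross = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(live)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Offsets past half the image size wrap around to negative values.
    return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, cross.shape))

def guidance_text(dy, dx):
    # Hypothetical mapping of the estimated offset to a user-facing movement hint.
    parts = []
    if dy:
        parts.append("down" if dy > 0 else "up")
    if dx:
        parts.append("right" if dx > 0 else "left")
    return "move probe " + " and ".join(parts) if parts else "hold position"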
Regarding Claim 9, the combination of Emery and Harks teach the claim limitations as noted above.
Emery does not teach: further comprising receiving annotation data for storage in the datastore with the reference image data.
Harks in the field of ultrasound imaging system teaches: “…the mapping unit 8 may identify fiducial image points in the live US image and map these image points to corresponding points of the model in order to determine the transformation. The mapping of fiducial points can be carried out using known computer vision techniques, such as, for example, scale-invariant feature transform (SIFT).” [0064]; “These points may be identified manually or automatically during the ablation procedure and stored in the mapping unit 8 in response to their identification so that they can be marked in subsequently generated visualizations.” [0079].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination to further receive annotation data for storage in the datastore with the reference image data, as taught in Harks, for transformation determination (Harks, [0064]) and/or for determining the relative positions of sensor(s) in the visualization (Harks, [0075], [0079]).
Regarding Claim 10, the combination of Emery and Harks teach the claim limitations as noted above.
Emery does not teach: wherein the second imaging device is different to the first.
Harks in the field of ultrasound imaging system teaches: “… in case the US probe 2 is an ICE probe, the images may be acquired using another US imaging modality, such as TEE or TTE Likewise, another imaging modality may be used to acquire one or more image(s) for creating the model, such as, for example computed tomography (CT) imaging, magnetic resonance (MR) imaging or 3D rotational angiography (3DATG).” [0050]; “Moreover, the generated visualizations may be fused with fluoroscopy images of the relevant region of the patient body acquired using a fluoroscopy device.” [0091].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the second imaging device in the combination of references to be different from the first, as taught in Harks, for obtaining various information regarding the area of interest such as tissue information, blood flow and positioning of medical devices (Harks, [0003]).
Regarding Claim 11, the combination of Emery and Harks teach the claim limitations as noted above.
Emery does not teach: further comprising processing the captured at least one frame of the live image data with an anatomical measurement algorithm.
Harks in the field of ultrasound imaging system teaches: “…the medical device is configured to carry out electrical measurements to generate an electro-anatomical map of the region of the patient body and wherein the mapping unit is configured to overlay the electro-anatomical map over the model on the basis of the relative position of the at least one ultrasound sensor with respect to the ultrasound probe during the measurements. The electro-anatomical map may particularly comprise an activation map and/or a voltage map of the region of the patient body which may include a region of the patient's heart.” [0017].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination to further comprise processing the captured at least one frame of the live image data with an anatomical measurement algorithm, as taught in Harks, to determine an accurate map of the relevant region of the patient body (Harks, [0017]).
Regarding Claim 12, Emery teaches: A processing arrangement (Figs. 2-3) comprising: an input/output (“The processor 62 is interfaced with a capture switch 66 or other input device…”, column 3);
and one or more processors, adapted to: receive via the input/output and from an external datastore, reference ultrasound image data depicting at least one view of an anatomy of a subject based on querying the datastore with patient identifier information (“For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. At a step 34, the user is provided with feedback indicating the results of the comparison for each image.”, column 2);
receive at the input/output live ultrasound image data (“FIG. 1 illustrates a pair of sequentially obtained ultrasound images. A reference image 10 contains a tissue sample 12 that is of interest to a physician or sonographer. For example, the tissue sample 12 may be a heart muscle in a non-stressed condition or may be an image of a tumor located in the patient's body.”, column 2);
recurrently generate a live similarity measure indicative of similarity between the view represented in the reference image data and a view represented in the live ultrasound image data (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images.”),
generate a user output alert when the similarity measure matches at least one pre-defined criterion (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2); and
capture and store one or more frames of the live image data, tagged with the patient identifier information (“When the user wishes to obtain another image of the tissue sample in the reference image, the user positions the probe at approximately the same location used to obtain the reference image and begins acquiring sequential ultrasound images.”, column 2; “At a step 36, it is determined whether the user has selected a subsequent image for comparison against the reference image. If not, the processing returns to step 32 and additional images are obtained and compared against the reference image. If the answer to step 36 is yes, then the image selected is stored on the recordable media at a step 38. At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Emery does not teach: wherein the similarity measure uses at least one of: image segmentation and binary presence of anatomical structures;
Harks in the field of ultrasound imaging system teaches: “The similarities between an US image and the model may be determined on the basis of a segmented version of the US image, which may be computed using a suitable procedure known the person skilled in the art. The similarity measure may be computed on the basis of the number of overlapping points between the segmented US image and the model for the best overlap between the segmented US image and the model.” [0049].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the similarity measure in Emery to use image segmentation as taught in Harks, in order to determine the transformation that would correlate and register the images such that the live ultrasound image has the largest similarity to the reference (Harks, [0064]).
Regarding Claim 13, Emery teaches: An ultrasound imaging apparatus (Fig. 3) comprising:
an input/output (“The processor 62 is interfaced with a capture switch 66 or other input device…”, column 3);
one or more processors, adapted to: receive via the input/output and from an external datastore, reference ultrasound image data depicting at least one view of an anatomy of a subject based on querying the datastore with patient identifier information (“For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. At a step 34, the user is provided with feedback indicating the results of the comparison for each image.”, column 2);
receive at the input/output live ultrasound image data (“FIG. 1 illustrates a pair of sequentially obtained ultrasound images. A reference image 10 contains a tissue sample 12 that is of interest to a physician or sonographer. For example, the tissue sample 12 may be a heart muscle in a non-stressed condition or may be an image of a tumor located in the patient's body.”, column 2);
recurrently generate a live similarity measure indicative of similarity between the view represented in the reference image data and a view represented in the live ultrasound image data (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images.”),
generate a user output alert when the similarity measure matches at least one pre-defined criterion (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2); and
capture and store one or more frames of the live image data, tagged with the patient identifier information and an ultrasound imaging probe for acquiring the live ultrasound imaging data (“When the user wishes to obtain another image of the tissue sample in the reference image, the user positions the probe at approximately the same location used to obtain the reference image and begins acquiring sequential ultrasound images.”, column 2; “At a step 36, it is determined whether the user has selected a subsequent image for comparison against the reference image. If not, the processing returns to step 32 and additional images are obtained and compared against the reference image. If the answer to step 36 is yes, then the image selected is stored on the recordable media at a step 38. At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Emery does not teach: wherein the similarity measure uses at least one of: image segmentation and binary presence of anatomical structures;
Harks in the field of ultrasound imaging system teaches: “The similarities between an US image and the model may be determined on the basis of a segmented version of the US image, which may be computed using a suitable procedure known the person skilled in the art. The similarity measure may be computed on the basis of the number of overlapping points between the segmented US image and the model for the best overlap between the segmented US image and the model.” [0049].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the similarity measure in Emery to use image segmentation as taught in Harks, in order to determine the transformation that would correlate and register the images such that the live ultrasound image has the largest similarity to the reference (Harks, [0064]).
Regarding Claim 14, Emery teaches: A system (Fig. 3), comprising:
an external datastore for storing reference ultrasound image data (“ a computer processor 62 which stores data for individual images in a memory 64. The processor 62 is interfaced with a capture switch 66 or other input device which allows a user to indicate when an image frame is to be saved.”, column 3);
an input/output (“The processor 62 is interfaced with a capture switch 66 or other input device…”, column 3);
one or more processors, adapted to: receive via the input/output and from the external datastore, reference ultrasound image data depicting at least one view of an anatomy of a subject based on querying the datastore with patient identifier information (“For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. At a step 34, the user is provided with feedback indicating the results of the comparison for each image.”, column 2);
receive at the input/output live ultrasound image data (“FIG. 1 illustrates a pair of sequentially obtained ultrasound images. A reference image 10 contains a tissue sample 12 that is of interest to a physician or sonographer. For example, the tissue sample 12 may be a heart muscle in a non-stressed condition or may be an image of a tumor located in the patient's body.”, column 2);
recurrently generate a live similarity measure indicative of similarity between the view represented in the reference image data and a view represented in the live ultrasound image data (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images.”),
generate a user output alert when the similarity measure matches at least one pre-defined criterion (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2); and
capture and store one or more frames of the live image data, tagged with the patient identifier information (“At a step 36, it is determined whether the user has selected a subsequent image for comparison against the reference image. If not, the processing returns to step 32 and additional images are obtained and compared against the reference image. If the answer to step 36 is yes, then the image selected is stored on the recordable media at a step 38. At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Emery does not teach: wherein the similarity measure uses at least one of: image segmentation and binary presence of anatomical structures;
Harks in the field of ultrasound imaging system teaches: “The similarities between an US image and the model may be determined on the basis of a segmented version of the US image, which may be computed using a suitable procedure known the person skilled in the art. The similarity measure may be computed on the basis of the number of overlapping points between the segmented US image and the model for the best overlap between the segmented US image and the model.” [0049].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the similarity measure in Emery to use image segmentation as taught in Harks, in order to determine the transformation that would correlate and register the images such that the live ultrasound image has the largest similarity to the reference (Harks, [0064]).
Regarding Claim 15, Emery teaches: A non-transitory computer program product comprising code means configured for execution by a processor, wherein, upon execution, the code causes the processor to (“The media may be an electronic memory or may be a computer-readable magnetic memory such as a hard disk, floppy disk, video tape, optical disk, etc…For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. ”, column 2):
receive from an external datastore, reference ultrasound image data depicting at least one view of an anatomy of a subject based on querying the datastore with patient identifier information (“For each subsequent image obtained, a computer processor within the ultrasound system computes a two-dimensional comparison with the reference image stored in memory. At a step 34, the user is provided with feedback indicating the results of the comparison for each image.”, column 2);
receive from an ultrasound imaging apparatus live ultrasound image data (“FIG. 1 illustrates a pair of sequentially obtained ultrasound images. A reference image 10 contains a tissue sample 12 that is of interest to a physician or sonographer. For example, the tissue sample 12 may be a heart muscle in a non-stressed condition or may be an image of a tumor located in the patient's body.”, column 2);
recurrently generate a live similarity measure indicative of similarity between the view represented in the reference image data and a view represented in the live ultrasound image data (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images.”),
generate a user output alert when the similarity measure matches at least one pre-defined criterion (“The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2);
capture and store one or more frames of the live image data, tagged with the patient identifier information (“At a step 36, it is determined whether the user has selected a subsequent image for comparison against the reference image. If not, the processing returns to step 32 and additional images are obtained and compared against the reference image. If the answer to step 36 is yes, then the image selected is stored on the recordable media at a step 38. At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Emery does not teach: wherein the similarity measure uses at least one of: image segmentation and binary presence of anatomical structures;
Harks in the field of ultrasound imaging system teaches: “The similarities between an US image and the model may be determined on the basis of a segmented version of the US image, which may be computed using a suitable procedure known the person skilled in the art. The similarity measure may be computed on the basis of the number of overlapping points between the segmented US image and the model for the best overlap between the segmented US image and the model.” [0049].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the similarity measure in Emery to use image segmentation as taught in Harks, in order to determine the transformation that would correlate and register the images such that the live ultrasound image has the largest similarity to the reference (Harks, [0064]).
Regarding Claim 16, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore stores a set of reference images depicting different respective views of the anatomy, each tagged in accordance with the view depicted (“At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2).
Regarding Claim 17, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore further stores a view recommendation indicative of a recommended next view of the anatomy to capture, and wherein the obtained reference image is one of the set of images depicting the recommended next view (“At a step 40, the reference image and the selected subsequent image are displayed for a physician or sonographer in order to compare the tissue sample under different conditions or at different times.”, column 2; “A 2-D cross-correlation technique provides not only a measure of the similarity between the two images but also provides data indicating which way the subsequent image (and hence which way the transducer should be moved) to increase the similarity of the images. Furthermore, it is not required that the comparison be based on the whole image. Instead, a user may select a portion of the image that is compared against additional images.”, column 2).
Regarding Claim 18, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the processor is further configured to generate probe positioning guidance based on the similarity metric for guiding movement of the probe by a user for improving the similarity metric, and output the guidance information to a user interface device (“At a step 34, the user is provided with feedback indicating the results of the comparison for each image. The comparison may produce an audible tone that varies in frequency or loudness with the degree of similarity between the two images. Alternatively, the feedback may comprise a visual display that changes in appearance or intensity with the degree of similarity. By responding to the feedback, the user can tell whether the image is becoming more or less like the reference image. Upon achieving an acceptable degree of similarity, the user knows that the orientation of the transducer is nearly the same as that used to obtain the reference image.”, column 2; “For example, the two images may be compared using correlation, mean brightness, 2-D Fourier transform, least mean squares, median or mode techniques or other mathematical techniques that provide an indication of the similarity of the two images. A 2-D cross-correlation technique provides not only a measure of the similarity between the two images but also provides data indicating which way the subsequent image (and hence which way the transducer should be moved) to increase the similarity of the images.”, column 2).
Regarding Claim 19, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the datastore is housed within a patient monitoring subsystem (See Figs. 2-3).
Regarding Claim 20, the combination of Emery and Harks teach the claim limitations as noted above.
Emery further teaches: wherein the processor is further configured to capture and store at least one frame of the live image data only when the similarity measure meets the pre-defined criterion (“Once a reference frame has been stored in a memory 64, subsequent frames are compared with the reference frame by the processor 62. A signal which is proportional to the comparison is supplied to a transducer 68, which provides a visual or audible indication of the degree of similarity to the user. Once the degree of similarity has reached an acceptable level, the user selects the capture switch 66 or speaks, a command which is interpreted and causes the processor 62 to store the latest image frame in the memory 64. Alternatively, the ultrasound system may always store the subsequent image having the greatest degree of similarity without user input.”, column 3).
Response to Arguments
With regard to Applicant’s remarks regarding 35 U.S.C. 112(b) for claim 9: “Regarding the rejection of claim 9, the claim recites "[t]he method of claim 1, further comprising receiving annotation data for storage in the datastore with the reference image data." The Patent Office asserts that "receiving annotation data for storage" lacks antecedent basis. However, the claim intentionally does not recite "annotation data for storage" in a way that indicates that the annotation data has been claimed previously, as this is the first recitation of the annotation data for storage. Further, as "annotation data" is or can be plural, preceding it with "a" would be grammatically incorrect. Accordingly, it is respectfully requested that the rejections under 35 U.S.C. § 112 be withdrawn, and respectfully submitted that the claims are in condition for allowance.”, Examiner respectfully disagrees with Applicant’s argument that there is no antecedent basis issue. The recited claim limitation implies that annotation data is received, yet there is no previous instance of annotation data in the claim. Examiner respectfully maintains the 35 U.S.C. 112(b) rejection of claim 9.
With regard to Applicant’s arguments that the claims are eligible under 35 U.S.C. 101, “Applicant respectfully asserts that claims as a whole are not directed to an abstract idea, and further asserts that the claims comprise a practical application and amount to significantly more than an abstract idea, and thus are directed to statutory subject matter under 35 U.S.C. § 101.”, Examiner again respectfully disagrees and maintains the 35 U.S.C. 101 rejection. Considering, for example, the acquiring limitation as recited: it was treated as extra-solution activity in the analysis above, but it can also be considered a mental recall of an image (i.e., a mental process) or obtaining an ultrasound image from a storage/memory (i.e., extra-solution activity). Segmentation is a partition/identification of an image into discrete groups, as provided by the standard definition known in the art; an individual can mentally identify the different structures (groups) presented in an image, or at most do so by pen and paper. The generating of a user output alert is considered a judgment or evaluation, which is grouped as a mental process under the 2019 PEG; an individual can internally verbalize an alert that consists of stop, retreat or proceed. Since the image data can be acquired and processed (e.g., the similarity measurement and the alert) within the mind, by pen and paper or at most by routine and conventional devices, the recited claim limitations constitute an abstract-idea mental process that is not integrated into a practical application.
Further, the recited requirement of an alert in response to the generated similarity measure is itself part of the judicial exception, since the limitation amounts to a practitioner evaluating the data and providing an alert that can be verbally expressed or mentally noted. Further, Genetic Technologies Limited v. Merial LLC (Fed. Cir. 2016) tells us that the inventive concept of step 2 of the Alice/Mayo analysis cannot be supplied by the abstract idea: the inventive concept necessary at step two of the Mayo/Alice analysis cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself. In the present application, once the alert is issued no further action is required; therefore the judicial exception is not integrated into a practical application. Accordingly, the claim recites an abstract idea.
The judicial exception is not integrated into a practical application, since the claims recite only generic computer components as additional elements. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.
With regard to Applicant’s arguments regarding 35 U.S.C. 103, Applicant makes blanket statements that the prior art does not teach the claim limitations, but does not specifically explain how Applicant is interpreting the prior art with regard to the specific recited claim limitations, such that a rebuttal could be provided. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Further, regarding Applicant’s argument that the prior art Emery does not teach “obtaining from the datastore”, Examiner respectfully disagrees. As provided in the office action above, Emery states that the reference image is stored in a memory (column 2). Applicant's arguments are also directed against the references individually; one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMAL FARAG whose telephone number is (571)270-3432. The examiner can normally be reached 8:30 - 5:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMAL ALY FARAG/Primary Examiner, Art Unit 3798