DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Amendments to the drawings overcome the objections to the drawings. The cancellation of claim 14 obviates all rejections of claim 14. Amendments to the remaining originally filed claims have not overcome any rejections; see the Response to Arguments section below for specifics. New claim 21 is rejected under 35 U.S.C. 101 and over the prior art; see the rejection sections below for specifics.
Response to Arguments
Applicant's arguments have been fully considered but they are not persuasive. Concerning the 35 USC 101 abstract idea rejection, in the first paragraph on page 10 of Applicant’s remarks, Applicant states that:
[Image omitted: media_image1.png (excerpt of Applicant's remarks)]
In the third paragraph on page 10 of Applicant’s remarks, Applicant argues that:
[Image omitted: media_image2.png (excerpt of Applicant's remarks)]
Examiner respectfully disagrees. Different radiologists have differing levels of sensitivity, i.e., differing abilities to correctly identify true positive (target) features while minimizing false negatives (misses). These are mental processes involving visual perception.
In the first paragraph on page 11 of Applicant’s remarks, Applicant argues that:
[Image omitted: media_image3.png (excerpt of Applicant's remarks)]
Examiner respectfully disagrees. As explained in the 35 USC 101 rejection section below, the following limitations of claim 1 are written broadly so as to reasonably be interpreted as being performed mentally: “generating a first set of candidate medical findings by subjecting the medical image to a first medical findings detection process; generating a second set of candidate medical findings by subjecting the medical image to a second medical findings detection process, the second medical findings detection process being different than the first medical findings detection process and having a higher sensitivity level than the first medical findings detection process; obtaining a region of interest in the medical image; identifying, in the region of interest, at least one candidate medical finding comprised in the second set of candidate medical findings and not comprised in the first set of candidate medical findings”. See the 35 USC 101 rejection section below for specifics. In other words, these limitations do not require interpretation as a technological solution. The excerpts that Applicant refers to above from paragraphs [0008] and [0043] of Applicant’s specification are not required under the broadest reasonable interpretation of the claim limitations. Nevertheless, the activities described in those excerpts are routinely performed by radiologists viewing displayed images, where different radiologists will have different levels of sensitivity as explained above.
Turning to Applicant’s remarks pertaining to claim rejections under 35 U.S.C. § 102, in the third paragraph on page 12 of Applicant’s remarks, Applicant argues that:
[Image omitted: media_image4.png (excerpt of Applicant's remarks)]
Examiner respectfully disagrees. Let us carefully consider the relevant cited teachings from Bonakdar:
[Image omitted: media_image5.png (excerpt of Bonakdar)]
[Image omitted: media_image6.png (excerpt of Bonakdar)]
[Image omitted: media_image7.png (excerpt of Bonakdar)]
As shown above, paragraph [0066] of Bonakdar teaches that, “The patient level sensitivity is between one of the first and second operating points taken alone (one false negative case from the first operating point can be turned into a true positive by the second operating point). On the lesion side, actual lesion level sensitivity is improved compared to the first operating point only” (emphasis added). A “false negative” means that a lesion was not detected by the first lesion detection model with the first operating point and first sensitivity. The second lesion detection model, with the second operating point and higher sensitivity, is able to detect these formerly undetected lesions; thus, the second lesion detection model then provides “true positive” detections where the first detection model was unable to detect those lesions (in the corresponding locations). This directly teaches the disputed claim limitation, “identifying, in the region of interest, at least one candidate medical finding comprised in the second set of candidate medical findings and not comprised in the first set of candidate medical findings”. That is, the same region of interest is considered by both lesion detection models, but there are some lesions that the second detection model can detect, precisely because it has higher sensitivity, that the first detection model is unable to detect.
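The relationship between the two detection processes can be illustrated with a short sketch. This is purely illustrative; the candidate names, scores, and threshold values below are hypothetical and are not drawn from Bonakdar or from the claims:

```python
# Illustrative sketch: a higher-sensitivity second detection process (here,
# a lower score threshold) recovers candidate findings that a first, less
# sensitive process misses; the identified findings are those in the second
# set but not the first, within the region of interest.

def detect(scores, threshold):
    """Return the set of candidate locations whose score meets the threshold."""
    return {loc for loc, score in scores.items() if score >= threshold}

# Hypothetical detection scores per image location (location -> score).
scores = {"lesion_A": 0.9, "lesion_B": 0.6, "lesion_C": 0.3}
region_of_interest = {"lesion_A", "lesion_B"}

first_set = detect(scores, threshold=0.8)   # less sensitive operating point
second_set = detect(scores, threshold=0.5)  # more sensitive operating point

# A "false negative" of the first process becomes a "true positive" of the
# second process; the identification is restricted to the region of interest.
identified = (second_set - first_set) & region_of_interest
print(sorted(identified))  # ['lesion_B']
```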
Applicant’s arguments with respect to the prior art rejections of claim 7 have been fully considered and are persuasive. The prior art rejection of claim 7 has been withdrawn.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “interface unit” and “computing unit” in claim 13 and “computing unit” in claims 14 and 15.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 and 15-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process abstract idea without significantly more.
Claim 1 recites:
“generating a first set of candidate medical findings by subjecting the medical image to a first medical findings detection process”, which can be reasonably interpreted as a first human observer viewing a displayed medical image and mentally generating a first set of candidate medical findings according to a first visual perception detection process;
“generating a second set of candidate medical findings by subjecting the medical image to a second medical findings detection process, the second medical findings detection process being different than the first medical findings detection process and having a higher sensitivity level than the first medical findings detection process”, which can be reasonably interpreted as a second human observer viewing a displayed image and mentally generating a second set of candidate medical findings according to a second visual perception detection process;
“obtaining a region of interest in the medical image”, which can be reasonably interpreted as human observer(s) viewing a displayed image and mentally designating a region of interest via visual perception; and
“identifying, in the region of interest, at least one candidate medical finding comprised in the second set of candidate medical findings and not comprised in the first set of candidate medical findings”, which can be reasonably interpreted as human observer(s) viewing a displayed image and mentally identifying, within the mentally designated region of interest, at least one candidate medical finding comprised in the second set of candidate medical findings and not comprised in the first set, via visual perception.
This judicial exception is not integrated into a practical application because additional elements of:
“computer-implemented” are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer;
“obtaining a medical image, the medical image depicting a body part of a patient” are generically recited insignificant extra-solution activity of data gathering; and
“providing the at least one candidate medical finding” are generically recited insignificant extra-solution activity of data outputting.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because additional elements of:
“computer-implemented” are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f);
“obtaining a medical image, the medical image depicting a body part of a patient” are insignificant extra-solution activity of data gathering; and
“providing the at least one candidate medical finding” are insignificant extra-solution activity of data outputting.
Depending claims do not remedy these deficiencies:
Claims 2-8, 12, and 16-18 further recite limitations that can reasonably be interpreted as being performed mentally by human observer(s), such as radiologist(s), viewing displayed images.
Claims 9, 10, 19, and 20 recite limitations that are additional elements that are insignificant extra-solution activity of data gathering.
Claim 11 recites limitations that are additional elements that are insignificant extra-solution activity of data outputting.
Claim 15 recites limitations that are additional elements that are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Claim 21 further recites limitations that can reasonably be interpreted as being performed mentally by human observer(s), such as radiologist(s), viewing displayed images.
As per claim(s) 13, arguments made in rejecting claim(s) 1 are analogous. Claim 13 also recites, “a system comprising… a computing unit is configured to”, which are additional elements that are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-6, 8, 12-13, and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2022/0138932 A1 (Bonakdar).
As per claim 1, Bonakdar teaches a computer-implemented method comprising:
obtaining a medical image, the medical image depicting a body part of a patient (Bonakdar: Figs. 18A and 18C (both shown below): mainly 1810;
Fig. 19 (shown below): mainly 1910;
Fig. 1: mainly 102-105);
generating a first set of candidate medical findings by subjecting the medical image to a first medical findings detection process (Bonakdar: abstract: “A lesion detection ensemble machine learning model architecture comprising a plurality of trained machine learning (ML) computer models is provided. A first decoder of a lesion detection ML model processes a medical image input to generate a first lesion mapping prediction. A second decoder of the lesion detection ML model processes the medical image input to generate a second lesion mapping prediction ….
The first decoder is trained with a first loss function that is configured to counterbalance a training of the second decoder that is trained using a second loss function different from the first loss function”;
Para 7: “implementing a first machine learning process, the first decoder with the first loss function, wherein the first loss function penalizes false negative lesion detection …
second machine learning process, the second decoder with the second loss function. The second loss function penalizes false positive lesion detection”;
Para 220 (shown below)
Figs. 18B and 18C and associated text in paras 221-224 (all shown below));
generating a second set of candidate medical findings by subjecting the medical image to a second medical findings detection process different than the first medical findings detection process (Bonakdar: abstract: see citation above;
para 7: See citation above;
[Image omitted: media_image5.png (excerpt of Bonakdar)]
[Image omitted: media_image6.png (excerpt of Bonakdar)]
Figs. 18B and 18C and associated text in paras 221-224 (all shown below));
obtaining a region of interest in the medical image (Bonakdar: para 8: “generate a mask corresponding to an anatomical structure of interest… masked portion of the received medical images corresponding to the anatomical structure of interest… the lesion detection performed may focus on the portion of input images that correspond to the anatomical region of interest where lesions are to be detected”;
para 9: “the anatomical structure of interest is a human liver”);
identifying, in the region of interest, at least one candidate medical finding comprised in the second set of candidate medical findings and not comprised in the first set of candidate medical findings (Bonakdar: para 9: “These features allow the illustrative embodiments to process certain medical images within a volume and do not have to process the entire volume. Moreover, in some illustrative embodiments, the invention may focus on detecting lesions in the human liver which tends to be a difficult task”;
[Image omitted: media_image7.png (excerpt of Bonakdar)]
[Image omitted: media_image8.png (excerpt of Bonakdar)]
[Image omitted: media_image9.png (excerpt of Bonakdar)]
[Image omitted: media_image10.png (excerpt of Bonakdar)]
Para 224: “The operation in FIG. 18C is similar to that of FIG. 18B but with the operations being performed with regard to voxels in the input set S”;
[Image omitted: media_image11.png (excerpt of Bonakdar)]
[Image omitted: media_image12.png (excerpt of Bonakdar)]
[Image omitted: media_image13.png (excerpt of Bonakdar)]
[Image omitted: media_image14.png (excerpt of Bonakdar)]
); and
providing the at least one candidate medical finding (Bonakdar:
para 10: “generating the final lesion prediction output by combining the combined lesion mapping prediction output and the unmasked lesion mapping prediction output”;
para 12: “outputting the final lesion prediction output comprises outputting the mask and the final lesion prediction output. By outputting the mask, which represents the anatomical structure of interest, the output allows for downstream computing systems to utilize the mask along with the lesion prediction output to generate representations of the anatomical structure and the corresponding detected lesions, such as in a medical imaging viewer application or the like.”
Also see arguments and citations offered in rejecting claims 7 and 11 below;
Figs. 18B-C (shown above): mainly 1835, 1845).
As per claim 2, Bonakdar teaches the method of claim 1, wherein the generating the first set of candidate medical findings includes detecting candidate medical findings in the medical image with a first sensitivity level, and the generating the second set of candidate medical findings includes detecting candidate medical findings in the medical image with a second sensitivity level higher than the first sensitivity level (Bonakdar: See arguments and citations offered in rejecting claim 1 above;
Para 66 (shown above): “a second operating point is used to re-interpret/process the detected lesion(s). This second operating point is chosen to be more sensitive…
The patient level sensitivity is between one of the first and second operating points taken alone (one false negative case from the first operating point can be turned into a true positive by the second operating point). On the lesion side, actual lesion level sensitivity is improved compared to the first operating point only.”;
Para 131: “One loss function is configured to penalize false positive errors (yielding low sensitivity, but high precision) and the other is configured to penalize false negative errors (yielding high sensitivity”;
Para 140: “produces high sensitivity detection with relatively low precision, the other of the encoders 623 uses a loss function for training that penalizes errors in false positive lesion detection, resulting in low sensitivity detection”;
Para 214 (shown above): “a more sensitive operating point is used at a lesion level, referred to herein as the lesion level operating point OP_lesion.”;
Para 220 (shown above): “The second operating point, i.e. the lesion level operating point OP_lesion, is defined such that lesion sensitivity is above the lesion sensitivity obtained for the first operating point…
the lesion level operating point is selected along the lesion level ROC curve such that the lesion sensitivity is above the lesion sensitivity for the patient level operating point.”;
para 226 (shown above): “patient level operating point that is relatively more highly specific and less sensitive…
processed by a second ML/DL computer model that is trained implementing a second operating point that is relatively more sensitive”).
As per claim 3, Bonakdar teaches the method of claim 2, wherein the first medical findings detection process includes applying a first medical findings detection algorithm to the medical image, the first medical findings detection algorithm operating at the first sensitivity level, and the second medical findings detection process includes applying a second medical findings detection algorithm to the medical image, the second medical findings detection algorithm operating at the second sensitivity level (Bonakdar: See arguments and citations offered in rejecting claim 2 above).
As per claim 4, Bonakdar teaches the method of claim 3, wherein the second medical findings detection algorithm and the first medical findings detection algorithm are the same (Bonakdar: See arguments and citations offered in rejecting claim 1 above;
Para 222 (shown above): “the same ML/DL computer model such that the second ML/DL computer model may be a processing of the input S with the same ML/DL computer model as 1820 but with different operational parameters corresponding to the second operating point.”;
Para 226: “the first and second ML/DL computer model may be the same model but configured with different operating parameters corresponding to the different training implementing the different operating points”).
As per claim 5, Bonakdar teaches the method of claim 2, further comprising: setting the first sensitivity level based on at least one of, an input of a user directed to set the first sensitivity level, the medical image, or supplementary non-image data associated with the medical image (Bonakdar: See arguments and citations offered in rejecting claim 2 above: Fig. 18A and associated text in para 220; para 66 (shown above);
[Image omitted: media_image5.png (excerpt of Bonakdar)]
[Image omitted: media_image6.png (excerpt of Bonakdar)]
[Image omitted: media_image15.png (excerpt of Bonakdar)]
).
As per claim 6, Bonakdar teaches the method of claim 2, further comprising: setting the second sensitivity level based on the first sensitivity level (Bonakdar: See arguments and citations offered in rejecting claims 2 and 5 above).
As per claim 8, Bonakdar teaches the method of claim 1, wherein the identifying the at least one candidate medical finding includes determining if the region of interest comprises the at least one candidate medical finding, and the providing the at least one candidate medical finding is based on the determining (Bonakdar: See arguments and citations offered in rejecting claim 1 above).
As per claim 12, Bonakdar teaches the method of claim 1, wherein each of the candidate medical findings of the second set of candidate medical findings comprises a confidence value, and the identifying the at least one candidate medical finding considers only candidate medical findings of the second set of medical findings having the confidence values above a preset confidence threshold (Bonakdar: See arguments and citations offered in rejecting claim 1 above;
para 223: “The second ML/DL computer model 1840 processes the input with the trained operational parameters corresponding to the second operating point to again generate classifications of lesions as to whether or not they are true positives or false positives. The result is a subset S1+ containing the predicted lesions (true positives) and a subset S1- containing the predicted false positives. The filtered listing of lesions 1845 is then output as the subset S1+, thereby effectively eliminating the false positives specified in the subset S1-.”;
para 226 (shown above): “second true positive subset”
: the “true” in “true positive” is indicative of confidence).
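The confidence-based filtering addressed by claim 12 can be illustrated as follows. This is a hypothetical sketch; the candidate names, confidence values, and preset threshold are illustrative only and are not drawn from Bonakdar or the claims:

```python
# Illustrative sketch: only candidates from the second set whose confidence
# value exceeds a preset threshold are considered (cf. claim 12 and
# Bonakdar's filtering out of predicted false positives).

CONFIDENCE_THRESHOLD = 0.75  # hypothetical preset value

# Hypothetical second set of candidate medical findings with confidence values.
second_set = [
    {"finding": "lesion_1", "confidence": 0.92},
    {"finding": "lesion_2", "confidence": 0.55},
    {"finding": "lesion_3", "confidence": 0.81},
]

considered = [c for c in second_set if c["confidence"] > CONFIDENCE_THRESHOLD]
print([c["finding"] for c in considered])  # ['lesion_1', 'lesion_3']
```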
As per claim(s) 13, arguments made in rejecting claim(s) 1 are analogous. Bonakdar also teaches a system comprising: an interface unit configured to obtain a medical image depicting a body part of a patient; and a computing unit is configured (Bonakdar: See arguments and citations offered in rejecting claim 1 above; Figs. 19 and 20 and associated text).
As per claim 14, Bonakdar teaches a non-transitory computer program product comprising program elements that, when executed by a computing unit of a system, cause the system to perform the method of claim 1 (Bonakdar: See arguments and citations offered in rejecting claim 1 above; Figs. 19 and 20 and associated text).
As per claim 15, Bonakdar teaches a non-transitory computer-readable medium on which program elements are stored that, when executed by a computing unit of a system, cause the system to perform the method of claim 1 (Bonakdar: See arguments and citations offered in rejecting claim 1 above; Figs. 19 and 20 and associated text).
As per claim(s) 16-18, arguments made in rejecting claim(s) 6-8 (and base and intervening claims 1, 2, and 5) are analogous, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 9-11 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bonakdar as applied to claims 1 and 18 above, and further in view of US 20090010511 A1 (Gardner).
As per claim 9, Bonakdar teaches the method of claim 1. Bonakdar does not teach the obtaining the region of interest includes: generating a representation of the medical image for displaying to a user in a user interface, providing the representation to the user in the user interface, receiving a user input directed to indicate a region of interest in the representation, and defining the region of interest in the medical image based on the user input directed to indicate the region of interest in the representation.
Gardner teaches these limitations (Gardner:
[0010] In a third aspect, a method is provided for identifying a region of interest. A user-selected point is identified. A distance to a boundary from the user-selected point is determined. The region of interest is identified as a function of the user-selected point and the distance.
Para 23: “A point is selected by the user or processor. Boundaries of the region of interest are identified based on the selected point and a distance from one or more boundaries within an image. In other embodiments, a virtual region of interest or an image processing region is identified as a region of interest determined by a user or processor with additional spatial locations in a contiguous grouping”;
Para 26: “a user selects a point in the myocardium of an image. The processor 20 applies an edge detection algorithm to determine the endocardial boundary and/or an epicardium boundary. The shortest distance(s) from the user-selected point to one or both boundaries is determined. The region of interest is then assigned based on the point and the distance(s). For example, the point provides a general position of the region of interest between the boundaries and the distance provides a spatial extent of the region of interest along any dimension.”
Also see paras 30-37;
: the distance is specified to be the distance to the boundary.
[Image omitted: media_image16.png (excerpt of Gardner)]
[Image omitted: media_image17.png (excerpt of Gardner)]
).
Thus, it would have been obvious to one of ordinary skill in the art, prior to filing, to implement the teachings of Gardner into Bonakdar, since both Bonakdar and Gardner are directed to the field of endeavor of displaying and analyzing anatomical medical images involving defining a region of interest, and Gardner additionally provides teachings that can be incorporated into Bonakdar in that the region of interest is defined interactively by user selection of a point, for “decreasing an amount of time used to designate regions of interest” (Gardner: para 5). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
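Gardner's cited mechanism (a region of interest identified as a function of a user-selected point and its distance to a detected boundary) can be sketched as follows. This is illustrative only; a simple circular region on a 2-D grid, with hypothetical coordinates and boundary points, is assumed rather than Gardner's actual edge-detection implementation:

```python
import math

# Illustrative sketch of the cited mechanism: given a user-selected point
# and detected boundary points, take the shortest distance from the point
# to the boundary, then define the region of interest as all grid
# locations within that distance of the point.

def region_of_interest(point, boundary_points, grid):
    radius = min(math.dist(point, b) for b in boundary_points)  # shortest distance
    return {p for p in grid if math.dist(point, p) <= radius}

# Hypothetical user-selected point and boundary on a small grid.
point = (5, 5)
boundary = [(5, 8), (9, 5), (5, 1)]
grid = {(x, y) for x in range(11) for y in range(11)}

roi = region_of_interest(point, boundary, grid)
print(len(roi))
```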
As per claim 10, Bonakdar in view of Gardner teaches the method of claim 9, wherein the user input is directed to a location in the representation, and the region of interest in the representation is defined as a region around the location within a preset distance from the location (Gardner: see arguments and citations offered in rejecting claim 9 above).
As per claim 11, Bonakdar in view of Gardner teaches the method of claim 9, wherein the providing the at least one candidate medical finding includes: including an indication of the at least one candidate medical finding in the representation so as to generate an updated representation, and displaying the updated representation to the user in the user interface (Bonakdar: See arguments and citations offered in rejecting claim 9 above;
para 12: “outputting the final lesion prediction output comprises outputting the mask and the final lesion prediction output. By outputting the mask, which represents the anatomical structure of interest, the output allows for downstream computing systems to utilize the mask along with the lesion prediction output to generate representations of the anatomical structure and the corresponding detected lesions, such as in a medical imaging viewer application or the like.”;
para 69: “This AI pipeline generated information may be provided to further downstream computing systems for further processing and generation of representations of the anatomical structure of interest and any detected lesions present in the anatomical structure. For example, graphical representations of the volume of input CT medical images may be generated in a medical image viewer or other computer application with the anatomical structure and detected lesions being superimposed or otherwise accentuated in the graphical representation using the contour information generated by the AI pipeline. In other illustrative embodiments, downstream processing of the AI pipeline generated information may include diagnosis decision support operations, automated medical imaging report generation based on the detected listing of lesions, classifications, and contour. In other illustrative embodiments, based on classifications of lesions, different treatment recommendations may be generated for review and consideration by medical practitioners.”;
para 104: “This AI pipeline 100 generated output may be provided to further downstream computing systems 180 for further processing and generation of representations of the anatomical structure of interest and any detected lesions present in the anatomical structure. For example, graphical representations of the input volume may be generated in a medical image viewer or other computer application of the downstream computing system 180 with the anatomical structure and detected lesions being superimposed or otherwise accentuated in the graphical representation using the contour information generated by the AI pipeline”;
para 110: “The filtered listing of lesions and their contours are provided to lesion classification logic which performs lesion classification to generate a finalized listing of lesions, their contours, and the lesion classifications (step 232). This finalized listing is provided along with liver contour information to downstream computing systems (step 234) which may operate on this information to generate medical imaging views in a medical imaging viewer application”;
para 11: “preventing the radiologist spending valuable manual resources on useless or faulty results when reviewing non-anatomical structure of interest input volumes, e.g., non-liver cases”;
para 151: “output of a list of lesions for downstream computing system operations, such as providing a medical viewing application”;
para 236: “identify the listing of liver lesions which is output to the cognitive computing system 2000 for further evaluation through the request processing pipeline 2008, for generating a medical imaging viewer application”).
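The claim-11 step cited above, including an indication of each candidate medical finding in the representation so as to generate an updated representation for display, can be illustrated with a minimal sketch. The function name, the list-of-lists image, and the integer marker are hypothetical choices for illustration, not Bonakdar's actual data structures.

```python
def add_finding_indications(image, findings, marker=9):
    """Illustrative sketch: copy the representation and superimpose an
    indication (here, a marker value) at each candidate finding's
    location, yielding an updated representation for display."""
    updated = [row[:] for row in image]  # leave the original representation untouched
    for (r, c) in findings:
        updated[r][c] = marker
    return updated

representation = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
updated = add_finding_indications(representation, [(1, 1), (0, 2)])
```

The updated representation, with findings accentuated, would then be passed to a viewer application for display to the user.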
As per claim(s) 19-20, arguments made in rejecting claim(s) 9-10 (and base and intervening claims 1, 2, 5, 16-18) are analogous, respectively.
Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over Bonakdar as applied to claim 1 above, and further in view of US 20100284590 A1 (Peng).
As per claim 21, Bonakdar teaches the method of claim 1. Bonakdar does not teach the region of interest is obtained after the first set of candidate medical findings and the second set of candidate medical findings are generated. Peng teaches these limitations (Peng: para 58: “a set of local features are detected (Step S22). Each local feature may be an anatomical landmark that is observable from the topogram. The set of local features may be a redundant set of local features, X, that are detected with multiple hypotheses. This is to say that there may be multiple local features detected for the same anatomical landmark whereby each of the local features is obtained based on a different set of assumptions for identifying the feature.”;
Para 75: “These remaining voters may then be used to automatically identify the desired ROIs from within the topogram image (Step S25). As the ROIs represent regions of interest within the body of the subject that are to be the focus of the imaging study, the medical practitioner may input the desired organs and/or other anatomical structures that are to be treated as ROIs (Step S20).”;
Para 76: “After the ROIs have been automatically identified within the topogram (Step S25), the goal is to perform the medical image study in such a way as to include the identified regions of interest.”;
Fig. 2 (shown below), mainly steps S22-S25:
[Image: media_image18.png]
).
Thus, it would have been obvious to one of ordinary skill in the art, prior to filing, to implement the teachings of Peng into Bonakdar, since both Bonakdar and Peng are directed to the same field of endeavor of detecting features based on different hypotheses or assumptions in the same anatomical region of a medical image. Peng additionally provides teachings that can be incorporated into Bonakdar in that a region of interest is obtained after detecting the features, since “These remaining voters may then be used to automatically identify the desired ROIs from within the topogram image (Step S25). As the ROIs represent regions of interest within the body of the subject that are to be the focus of the imaging study, the medical practitioner may input the desired organs and/or other anatomical structures that are to be treated as ROIs (Step S20)” (Peng: para 75) and since “After the ROIs have been automatically identified within the topogram (Step S25), the goal is to perform the medical image study in such a way as to include the identified regions of interest.” (Peng: para 76). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
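The claimed ordering for which Peng is cited, that both candidate sets are generated first and the region of interest is obtained only afterwards, can be sketched as follows. Every name and callable here is a hypothetical stand-in for the claimed detection processes and ROI step; none is taken from Bonakdar or Peng.

```python
def identify_extra_findings(image, detect_low, detect_high, get_roi):
    """Illustrative sketch of the claimed ordering: generate both
    candidate sets first, obtain the ROI afterwards, then identify
    candidates found only by the higher-sensitivity process within
    that ROI."""
    first = detect_low(image)    # first (lower-sensitivity) detection process
    second = detect_high(image)  # second (higher-sensitivity) detection process
    roi = get_roi(image)         # ROI obtained only after both sets exist
    # Candidates in the second set but not the first, inside the ROI.
    return [f for f in second if f in roi and f not in first]

findings = identify_extra_findings(
    "img",                                                   # placeholder image
    lambda img: {(0, 0)},                                    # low-sensitivity stub
    lambda img: {(0, 0), (1, 1), (4, 4)},                    # high-sensitivity stub
    lambda img: {(r, c) for r in range(3) for c in range(3)},# ROI stub
)
```

In this toy run, (1, 1) is the only candidate detected by the higher-sensitivity process alone that also falls inside the ROI.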
Allowable Subject Matter
Claim 7 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Limitations pertaining to “the first medical findings detection process and the second medical findings detection process run in parallel”, in conjunction with other limitations present in the independent claim(s), distinguish over the prior art.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick whose telephone number is (571) 270-5255. The examiner can normally be reached M-F, 10:00am-6:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached on (571) 270-5183. The fax phone number for Atiba Fitzpatrick is (571) 270-6255.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Atiba Fitzpatrick
/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677