DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/9/26 has been entered. Currently, claims 1-18 are pending.
Response to Arguments
Applicant's arguments filed 2/9/26 have been fully considered but they are not persuasive.
The applicant asserts Vaz et al. (US 2006/0056691) does not teach "segment finding regions corresponding to a plurality of intensity sections divided and set based on at least one of a property or state of tissue in a lung area image generated by excluding the airways and the blood vessels from the medical image, wherein a combination of the plurality of intensity sections are determined based on a target diagnosis information; and generate reading assistance information based on quantification of distributions of the finding regions corresponding to the plurality of intensity sections within the lung area image," as recited in claims 1 and 14, and "segment second regions corresponding to a second intensity section in the lung area image, wherein a combination of the first intensity section and the second intensity section are determined based on a target diagnosis information; and generate reading assistance information based on quantification of a distribution of the first regions and the second regions within the lung area image," as recited in claims 7 and 16.
The Examiner respectfully disagrees, as Vaz discloses the above-mentioned features. Particularly, Vaz discloses receiving and processing digital medical image data, which may be 3D reconstructed data. The 3D image data is rendered in accordance with data processing results, such as intensity variations (para 43). Acquired CT image data of the parenchyma in the pair of lungs is segmented. Once the image data has been segmented, a perfusion map of the segmented image data is generated. The perfusion map is generated by performing an adaptive smoothing of the segmented image using an averaging operator (para 45). The perfusion map is then rendered as a color-coded semi-transparent 3D volume. An example of this is shown in image (a) of FIG. 3. As shown in FIG. 3, image (a) is an original slice of CT data with the perfusion visualization overlaid. As shown in image (a), spheres 310a indicate the locations of pulmonary emboli, blue opaque patches 320 indicate areas of lower perfusion in the parenchyma, green semi-opaque patches 330 indicate areas of average perfusion, and red transparent patches 340 indicate areas of high perfusion. In other words, patches 320 indicate areas that have a lack of blood flow, patches 330 indicate areas that have healthy or normal perfusion, and patches 340 indicate areas that have increased densities or abnormally high perfusion (para 47). The different color patches read on the plurality of intensity sections based on a target diagnosis of pulmonary emboli, the intensity sections being those of low perfusion, average perfusion, and high perfusion. The color-coding not only provides reading assistance but also designates different intensity sections.
The applicant argues that the color patches of Vaz are intended to provide a mere visual representation and cannot be understood as analytical criteria for segmenting anatomical or pathological regions. The Examiner does not find this differentiation explicitly recited in the current claim language. Therefore, Vaz discloses "segment finding regions corresponding to a plurality of intensity sections divided and set based on at least one of a property or state of tissue in a lung area image generated by excluding the airways and the blood vessels from the medical image, wherein a combination of the plurality of intensity sections are determined based on a target diagnosis information; and generate reading assistance information based on quantification of distributions of the finding regions corresponding to the plurality of intensity sections within the lung area image," as recited in claims 1 and 14, and "segment second regions corresponding to a second intensity section in the lung area image, wherein a combination of the first intensity section and the second intensity section are determined based on a target diagnosis information; and generate reading assistance information based on quantification of a distribution of the first regions and the second regions within the lung area image," as recited in claims 7 and 16.
Claim Rejections - 35 USC § 102
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-10, 12-16, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vaz et al. (US 2006/0056691), cited in the IDS dated 8/28/24.
Regarding claims 1 and 14, Vaz discloses a medical image reading assistance method and a medical image reading assistance apparatus for assisting reading of chest medical images, the medical image reading assistance apparatus comprising a computing system, wherein the computing system comprises at least one processor, and wherein the at least one processor is configured to:
segment airways and blood vessels from a medical image including lungs (see paras 45 and 52, a CT image of lungs is segmented into airways and blood vessels);
segment finding regions corresponding to a plurality of intensity sections divided and set based on at least one of a property or state of tissue in a lung area image generated by excluding the airways and the blood vessels from the medical image, wherein a combination of the plurality of intensity sections are determined based on a target diagnosis information (see Figs. 3-5 and paras 45, 47-48, 50, and 52, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, airways and blood vessels are excluded, different color patches read on the plurality of intensity sections based on a target diagnosis of pulmonary emboli); and
generate reading assistance information based on quantification of distributions of the finding regions corresponding to the plurality of intensity sections within the lung area image (see paras 43, 47-48, 50, 57, and 59-60, a perfusion map is displayed to a medical practitioner, the perfusion map includes intensity sections of perfusion displayed as opaque patches).
Regarding claims 7 and 16, Vaz discloses a medical image reading assistance method and a medical image reading assistance apparatus for assisting reading of chest medical images, the medical image reading assistance apparatus comprising a computing system, wherein the computing system comprises at least one processor, and wherein the at least one processor is configured to:
segment airways and blood vessels from a medical image including lungs (see paras 45 and 52, a CT image of lungs is segmented into airways and blood vessels);
segment first regions corresponding to a first intensity section set to include at least one of blood or thrombi in a lung area image generated by excluding the airways and the blood vessels from the medical image (see Figs. 3-5 and paras 45, 47-48, 50, and 52, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, airways and blood vessels are excluded, perfusion is the passage of blood);
segment second regions corresponding to a second intensity section in the lung area image, wherein a combination of the first intensity section and the second intensity section are determined based on a target diagnosis information (see Figs. 3-5 and paras 45, 47-48, 50, and 52, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, airways and blood vessels are excluded, different color patches read on the plurality of intensity sections based on a target diagnosis of pulmonary emboli); and
generate reading assistance information based on quantification of a distribution of the first regions and the second regions within the lung area image (see paras 43, 47-48, 50, 57, and 59-60, a perfusion map is displayed to a medical practitioner, the perfusion map includes intensity sections of perfusion displayed as opaque patches).
Regarding claim 2, Vaz further discloses wherein the plurality of intensity sections are set based on at least one of a clinically distinctive property or state of tissue (see paras 47 and 57-60, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues).
Regarding claims 3 and 15, Vaz further discloses wherein the at least one processor is further configured to threshold the finding regions based on the plurality of intensity sections within the lung area image (see paras 43, 47, and 57-60, thresholds are used to determine low, average, and high perfusion regions).
Regarding claim 4, Vaz further discloses wherein the at least one processor is further configured to, within the lung area image, visualize a distribution of first finding regions corresponding to a first intensity section on a first window and also visualize a distribution of second finding regions corresponding to a second intensity section on a second window (see para 48, a medical practitioner can toggle between two windows of image data).
Regarding claim 5, Vaz further discloses wherein the at least one processor is further configured to, within the lung area image, visualize a distribution of first finding regions corresponding to a first intensity section and also visualize a distribution of second finding regions corresponding to a second intensity section by overlaying the distribution of second finding regions on the distribution of first finding regions (see para 47, an original slice of CT data can be overlaid with the perfusion visualization).
Regarding claim 6, Vaz further discloses wherein the at least one processor is further configured to provide a user menu configured to allow a user to reset the plurality of intensity sections or add one or more intensity sections to the plurality of intensity sections (see paras 58-59, a medical practitioner can interactively adjust the perfusion map).
Regarding claim 8, Vaz further discloses wherein the first intensity section is set to correspond to blood (see para 47, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, perfusion is the passage of blood).
Regarding claim 9, Vaz further discloses wherein the first intensity section is set to correspond to thrombi (see paras 5, 47-48, 50, and 56, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, a pulmonary embolism is a blood clot, and a thrombus is a blood clot).
Regarding claim 10, Vaz further discloses wherein the at least one processor is further configured to: segment second regions corresponding to the second intensity section set to correspond to blood within the lung area image; and visualize a distribution of the first regions and a distribution of the second regions so that the first regions and the second regions are distinguished from each other within the lung area image (see paras 45-47, a perfusion map is created from small regions of the lung image, the perfusion map detects pulmonary emboli and/or blood flow issues, airways and blood vessels are excluded, perfusion is the passage of blood).
Regarding claim 12, Vaz further discloses wherein the at least one processor is further configured to threshold the first regions based on the first intensity section within the lung area image (see paras 43, 47, and 57-60, thresholds are used to determine low, average, and high perfusion regions).
Regarding claims 13 and 18, Vaz further discloses wherein the at least one processor is further configured to: quantify a distribution of the first regions within the lung area image; and generate reading assistance information based on information about the quantification of the distribution of the first regions, wherein the reading assistance information includes: a first diagnosis information regarding whether a lung disease is present in the lung area image; and a second diagnosis information regarding a cause of the lung disease (see paras 45, 47-48, 50, 52, and 57-60, a perfusion map is displayed to a medical practitioner, the perfusion map includes intensity sections of perfusion displayed as opaque patches, the perfusion map detects pulmonary emboli and/or blood flow issues).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Vaz as applied to claims 7 and 16 above, and further in view of Chaganti et al. (US 2021/0398654).
Vaz does not expressly disclose wherein the at least one processor is further configured to: segment third regions corresponding to a third intensity section set to correspond to ground-glass opacity (GGO) within the lung area image; and visualize a distribution of the first regions and a distribution of the third regions so that the first regions and the third regions are distinguished from each other within the lung area image.
Chaganti discloses wherein the at least one processor is further configured to: segment third regions corresponding to a third intensity section set to correspond to ground-glass opacity (GGO) within the lung area image (see paras 30, 49, 59, and 67, a medical image may show opacities such as GGO); and
visualize a distribution of the first regions and a distribution of the third regions so that the first regions and the third regions are distinguished from each other within the lung area image (see Figs. 2 and 6 and paras 54, 59, and 67, the classification of the medical images is displayed, as is a probability map for regions that include GGO).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the GGO segmentation and visualization, as described by Chaganti, with the system of Vaz.
The suggestion/motivation for doing so would have been to aid in the determination of CT image opacities, thereby reducing misclassification of abnormalities in lung regions.
Therefore, it would have been obvious to combine Chaganti with Vaz to obtain the invention as specified in claims 11 and 17.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R MILIA whose telephone number is (571) 272-7408. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached at 571-270-3438. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK R MILIA/Primary Examiner, Art Unit 2681