DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
The disclosure is objected to because of the following informalities:
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 6-7, 13-14, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Nie et al. (US 2011/0123087), hereinafter referred to as Nie.
As per Claim 1, Nie teaches a medical image processing method, performed by an electronic device, the method comprising:
obtaining a biological tissue image comprising a biological tissue, the biological tissue image comprising a first biological tissue image (Nie, Paragraph [0061], “Various embodiments of the invention relate to systems and methods for the measurement and display of attributes of an object of interest in a medical image. An object of interest is identified, such as a lesion in a breast area on a mammography image”) and a second biological tissue image that are images of the biological tissue in different views; (Nie, Paragraph [0053], [0054])
recognizing, in the biological tissue image, a first region of a lesion object in the biological tissue, and (Nie, Paragraph [0061], “At least one attribute of the lesion is then automatically measured, such as the area of the lesion, the width and height of a cluster of lesions, the number of lesions in a cluster, or the distance from one or more lesions to an anatomical feature such as the nipple, skin line or chest wall. The measurements are then displayed to a user, for example by mapping the measurements onto the mammography image. Additionally, anatomical zones, such as standard, quadrant and clock zones of the breast area may be determined and mapped onto the mammography image to display the location of the lesion as it corresponds to the zones”)
recognizing a lesion attribute matching the lesion object; (Nie, Paragraph [0061], “Various embodiments of the invention relate to systems and methods for the measurement and display of attributes of an object of interest in a medical image. An object of interest is identified, such as a lesion in a breast area on a mammography image. At least one attribute of the lesion is then automatically measured, such as the area of the lesion, the width and height of a cluster of lesions, the number of lesions in a cluster, or the distance from one or more lesions to an anatomical feature such as the nipple, skin line or chest wall.”)
dividing an image region of the biological tissue in the biological tissue image into a plurality of quadrant regions, (Nie, Paragraph [0061], “Additionally, anatomical zones, such as standard, quadrant and clock zones of the breast area may be determined and mapped onto the mammography image to display the location of the lesion as it corresponds to the zones”) the image region of the biological tissue comprising a first tissue image region in the first biological tissue image and a second tissue image region in the second biological tissue image, and the plurality of quadrant regions comprising: an inner quadrant region and an outer quadrant region divided from the first tissue image region, and an upper quadrant region and a lower quadrant region divided from the second tissue image region; (Nie, Figure 7, Figure 8, Paragraph [0071]-[0072])
obtaining target quadrant position information of a quadrant region in which the first region is located; and generating medical service data according to the target quadrant position information and the lesion attribute. (Nie, Paragraph [0061], “Additionally, anatomical zones, such as standard, quadrant and clock zones of the breast area may be determined and mapped onto the mammography image to display the location of the lesion as it corresponds to the zones”)
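Examiner notes, purely for illustration, that the quadrant assignment recited in claim 1 can be sketched as follows; the coordinate conventions, function name, and parameters below are the examiner's illustrative assumptions and are not drawn from the claims or from Nie:

```python
def quadrant_labels(cc_x, cc_line_x, mlo_y, mlo_line_y):
    """Illustrative quadrant assignment: a segmentation line in the
    CC view separates the inner and outer quadrant regions, and a
    segmentation line in the MLO view separates the upper and lower
    quadrant regions. Coordinates are assumed to increase rightward
    (x) and downward (y)."""
    horizontal = "inner" if cc_x < cc_line_x else "outer"
    vertical = "upper" if mlo_y < mlo_line_y else "lower"
    return (horizontal, vertical)
```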
As per Claim 2, Nie teaches the method according to claim 1, wherein the biological tissue comprises a first tissue object; and (Nie, Paragraph [0066])
the dividing an image region of the biological tissue in the biological tissue image into a plurality of quadrant regions comprises: recognizing a second region of the first tissue object in the biological tissue image; (Nie, Paragraph [0066], “A breast border and nipple detection algorithm may be used to determine the location of the anatomical features such as the skin line 104 (breast border) and nipple 106”)
determining quadrant segmentation lines in the biological tissue image according to the second region; and using the image region of the biological tissue in the biological tissue image as a tissue image region, and dividing the tissue image region into the plurality of quadrant regions according to the quadrant segmentation lines. (Nie, Paragraph [0071]-[0072], [0076], “FIG. 9 illustrates a further embodiment of the invention, where a quadrant and clock location measurement of a lesion can be determined, as long as both the CC and ML/MLO views are available. One method of determining the quadrant and clock location is to compute the distance of the lesion to the central line segment in the CC view image (Dcc) and ML view image (Dml), which are denoted by Dcc and Dml, respectively. In the breast diagram as shown in FIG. 9, let the angle between the line segment that connects the lesion and the nipple (i.e., the center of the clock face) and the horizontal line be denoted by A, then there exists a relationship where A=ArcTan(Dml/Dcc). The clock position can be readily determined according to the value of A”)
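Examiner notes, purely for illustration, that the clock-location relationship quoted from Nie, Paragraph [0076] (A = ArcTan(Dml/Dcc)) can be sketched as follows; the angle-to-hour mapping and variable names are the examiner's illustrative assumptions, not taken from Nie:

```python
import math

def clock_position(d_cc, d_ml):
    """Estimate a clock-face hour for a lesion from its signed
    distances to the central line segment in the CC view (d_cc)
    and the ML/MLO view (d_ml), per A = ArcTan(Dml/Dcc).
    Assumes 0 degrees corresponds to 3 o'clock, with angles
    increasing counterclockwise."""
    angle = math.degrees(math.atan2(d_ml, d_cc))
    hour = (3 - angle / 30.0) % 12  # 30 degrees per clock hour
    return round(hour) or 12  # map hour 0 back to 12 o'clock
```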
As per Claim 6, Nie teaches the method according to claim 1, wherein the first biological tissue image is an image of the biological tissue in a craniocaudal view, and the second biological tissue image is an image of the biological tissue in a mediolateral oblique view. (Nie, Paragraph [0053], [0054])
As per Claim 7, Nie teaches the method according to claim 2, wherein the quadrant segmentation lines comprise a first segmentation line corresponding to the first biological tissue image and a second segmentation line corresponding to the second biological tissue image, and dividing the tissue image region into the plurality of quadrant regions according to the quadrant segmentation lines comprises: dividing the first tissue image region into the inner quadrant region and the outer quadrant region in the first biological tissue image according to the first segmentation line; dividing the second tissue image region into the upper quadrant region and the lower quadrant region in the second biological tissue image according to the second segmentation line; and determining the inner quadrant region, the outer quadrant region, the upper quadrant region, and the lower quadrant region as the quadrant regions. (Nie, Figure 7, Figure 8, Paragraph [0071]-[0072], Paragraph [0061], “Additionally, anatomical zones, such as standard, quadrant and clock zones of the breast area may be determined and mapped onto the mammography image to display the location of the lesion as it corresponds to the zones”)
As per Claim 13, Claim 13 claims a medical image processing apparatus performing the medical image processing method as claimed in Claim 1. Therefore the rejection is analogous to that made in Claim 1.
As per Claim 14, Claim 14 claims the same scope as claimed in Claim 2. Therefore the rejection is analogous to that made in Claim 2.
As per Claim 18, Claim 18 claims the same scope as claimed in Claim 6. Therefore the rejection is analogous to that made in Claim 6.
As per Claim 19, Claim 19 claims the same scope as claimed in Claim 7. Therefore the rejection is analogous to that made in Claim 7.
As per Claim 20, Claim 20 claims a non-transitory computer storage medium, storing a computer program, the computer program comprising program instructions (Nie, Paragraph [0060]), the program instructions performing the medical image processing method as claimed in Claim 1. Therefore the rejection is analogous to that made in Claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-5 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Nie et al. (US 2011/0123087), hereinafter referred to as Nie, as applied to Claims 2 and 13 above, respectively, and further in view of Dal Mutto et al. (US 2019/0108396), hereinafter referred to as Dal Mutto.
As per Claim 3, Nie teaches the method according to claim 2,
Nie does not explicitly teach wherein the biological tissue image comprises a first image and a second image, and the first image and the second image are images of the biological tissue in different views; and the recognizing a second region of the first tissue object in the biological tissue image comprises:
obtaining an image semantic segmentation model, and determining a first marked region of the first object in the first image based on the image semantic segmentation model; determining a second marked region of the first object in the second image based on the image semantic segmentation model; and determining the first marked region and the second marked region as the second region.
Dal Mutto teaches wherein the biological tissue image comprises a first image and a second image, and the first image and the second image are images of the biological tissue in different views; and the recognizing a second region of the first tissue object in the biological tissue image comprises:
obtaining an image semantic segmentation model, and determining a first marked region of the first object in the first image based on the image semantic segmentation model; determining a second marked region of the first object in the second image based on the image semantic segmentation model; and determining the first marked region and the second marked region as the second region. (Dal Mutto, Paragraph [0156], “In some embodiments, pixel-accurate image segmentation and semantic classification are performed based on a framework based on Fully Convolutional Neural Networks (FCNNs) and described in Song, S., S. P. Lichtenberg, and J. Xiao. Sun RGB-D: A RGB-D scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. In more detail, in one embodiment, a training set is generated by randomly placing a set of 3-D models on a surface (e.g., using physics modeling engines to make sure that they respect fundamental physical properties such as gravity and to ensure that the objects do not interpenetrate or “clip” through each other), and then rendering such 3D models from multiple points of view, saving which class (e.g., which object or the type of the object) each pixel in the rendered image belongs to. This is called semantic segmentation and can be solved with Fully Convolutional Networks (FCNs). FCNs are usually obtained by modification of traditional convolutional neural networks (CNNs) for classification through two classical techniques: “convolutionalization” of the final fully connected layers of the CNN and by the introduction of skip layers. The same operation can be applied to a Multi-View classification CNN or to a CNN obtained by considering only one layer of such a CNN.”)
Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Dal Mutto into Nie, because utilizing a segmentation model and a fully convolutional network would improve the accuracy of the lesion recognition method of Nie.
Therefore it would have been obvious to one of ordinary skill to combine the two references to obtain the invention in Claim 3.
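Examiner notes, purely for illustration, that the marked-region determination of claim 3 can be sketched as follows; the per-pixel mask representation, function names, and class labels are hypothetical assumptions of the examiner, not the implementation of the applicant or of Dal Mutto:

```python
def marked_region(seg_mask, target_class):
    """Collect the pixel coordinates that a (hypothetical) semantic
    segmentation model labeled with target_class; this set stands in
    for a 'marked region' of the first tissue object in one view."""
    return {(r, c)
            for r, row in enumerate(seg_mask)
            for c, label in enumerate(row)
            if label == target_class}

def second_region(mask_view1, mask_view2, target_class=1):
    # Per claim 3, the second region is determined as the pair of
    # marked regions, one per view.
    return (marked_region(mask_view1, target_class),
            marked_region(mask_view2, target_class))
```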
As per Claim 4, Nie in view of Dal Mutto teaches the method according to claim 3, wherein the determining a first marked region of the first tissue object in the first biological tissue image based on the image semantic segmentation model comprises: performing forward convolution and backward convolution on the first biological tissue image based on a forward convolutional layer and a transposed convolutional layer in the image semantic segmentation model, to obtain a convolutional feature map; determining an object attribute of each pixel in the first biological tissue image according to the convolutional feature map, the object attribute comprising a first tissue attribute indicating whether a corresponding pixel pertains to the first tissue object; and using an image region formed by pixels pertaining to the first tissue object as the first marked region of the first tissue object in the first biological tissue image. (Nie, Paragraph [0061], [0066], [0076] and Dal Mutto, Paragraph [0156], [0139], [0132])
The rationale applied to the rejection of claim 3 has been incorporated herein.
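Examiner notes, purely for illustration of the forward and transposed convolution operations recited in claim 4, the following minimal one-dimensional sketch; the one-dimensional setting, kernels, and stride are simplifying assumptions (the claim recites layers of an image semantic segmentation model operating on two-dimensional images):

```python
def conv1d(x, k, stride=2):
    """Strided (forward) convolution: slides kernel k over input x,
    downsampling by the stride."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(0, len(x) - n + 1, stride)]

def transposed_conv1d(y, k, stride=2):
    """Transposed convolution: scatters each feature value back onto
    a higher-resolution grid, recovering a dense per-position map of
    the kind used for per-pixel attribute prediction."""
    out = [0.0] * ((len(y) - 1) * stride + len(k))
    for i, v in enumerate(y):
        for j, w in enumerate(k):
            out[i * stride + j] += v * w
    return out
```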
As per Claim 5, Nie in view of Dal Mutto teaches the method according to claim 4, wherein the quadrant segmentation lines comprise a first segmentation line corresponding to the first biological tissue image and a second segmentation line corresponding to the second biological tissue image, and the object attribute further comprises a second tissue attribute; and the determining quadrant segmentation lines in the biological tissue image according to the second region comprises: obtaining an edge boundary of the biological tissue in the first biological tissue image, and determining the first segmentation line in the first biological tissue image according to the first marked region and the edge boundary; using an image region formed by pixels pertaining to the second tissue attribute as an object region, of a second tissue object in the second biological tissue image, in the second biological tissue image; and determining an object boundary of the object region, and determining the second segmentation line in the second biological tissue image according to the second marked region and the object boundary.
The rationale applied to the rejection of claim 4 has been incorporated herein.
As per Claim 15, Claim 15 claims the same limitation as Claim 3 and is dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claim 3.
As per Claim 16, Claim 16 claims the same limitation as Claim 4 and is dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claim 4.
As per Claim 17, Claim 17 claims the same limitation as Claim 5 and is dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claim 5.
Allowable Subject Matter
Claims 8-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571) 270-5245. The examiner can normally be reached Monday-Friday, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING Y HON/Primary Examiner, Art Unit 2666