Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Objections to the claims are withdrawn.
Applicant's arguments filed 10/22/2025 have been fully considered but they are not persuasive.
[Image: media_image1.png]
On page 8, Applicant argues,
Examiner respectfully disagrees. As outlined under Step 2A: prong 1 of the office action dated 07/22/2025, the claim is directed to a mental process. For example, claim 1 recites receiving segmentation results, determining differences between those results, and using the differences to detect anatomical abnormalities. A physician reviewing segmented medical scans could mentally compare two segmentations, identify regions of disagreement between those results, and apply a simple mental rule (e.g., deciding that a number of differences exceeds a limit) for abnormality detection.
Examiner notes that the “receive” step of claim 1 merely requires receiving segmentation data obtained from segmentation algorithms rather than requiring the processor to perform the claimed segmentation algorithms. As a result, the limitation can reasonably be interpreted as a person receiving two segmentation maps for analysis, and does not require a human to process the tens of thousands of calculations required for anatomical segmentation, as argued by Applicant. This mental process can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper. Thus, under Step 2A: prong 1, claim 1, as drafted, falls within the mental processes grouping (see MPEP 2106.04(a)(2)(III)(B)), and the claim is directed to a judicial exception (mental process).
[Images: media_image2.png, media_image1.png]
On pages 8-9, Applicant argues,
[Images: media_image5.png, media_image6.png, media_image2.png]
Step 2A: prong 2 considers whether the claim contains additional elements which integrate the judicial exception into a practical application (see MPEP 2106.04(d)).
In this case, claim 1 includes additional elements: (i) “a memory that stores a plurality of instructions; and a processor that couples to the memory…”, and (ii) “the two segmentation maps obtained by a first segmentation and a second segmentation, the first segmentation implementing a shape-prior-based segmentation algorithm, and the second segmentation implementing a segmentation algorithm that is not based on a shape-prior or the second segmentation accounting for one or more shape priors at a lower weight as compared to the first segmentation”.
Regarding additional element (i), reciting a memory and processor amounts to merely implementing the abstract idea on a generic computer. The additional element is recited at a high level of generality such that it amounts to merely using a computer as a tool to implement the abstract idea. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Regarding additional element (ii), although the act of receiving and interpreting segmentation maps is reasonably interpreted as part of the mental process identified under Step 2A: prong 1, the further limitation specifying that the received data is “obtained by a first segmentation and a second segmentation…” is treated as an additional element. Similar to the arguments made under Step 2A: prong 1, this additional element does not require the claimed processor to perform any specific segmentation algorithm, but rather merely characterizes the data of the segmentation maps being received. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
MPEP 2106.05(a) states:
“An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107.”
Applicant argues that the claim contains a specific technical improvement for detecting abnormalities using a boundary distance threshold. However, claim 1 contains no recitation of such boundary distance thresholding. The claim, as drafted, contains additional elements which do not integrate the exception into a practical application because they do not meaningfully limit the judicial exception. For example, the claim does not include any specific details regarding the segmentation output comparison or the anatomical abnormality detection that would amount to a particular solution or particular way of achieving the desired outcome. The claim merely links the mental process to a technological environment without including any specific technical steps to implement the alleged improvement. Thus, claim 1 fails to integrate the abstract idea into a practical application under Step 2A: prong 2.
[Image: media_image9.png]
On page 10, Applicant argues,
Under Step 2B the additional elements identified above are evaluated to determine if they are sufficient to amount to significantly more than the judicial exception (see MPEP 2106.05).
The additional elements of claim 1 can be reasonably interpreted as well-understood, routine, and conventional computer elements and functions used to acquire, compare, and process data. The claim limitations are recited at a high level of generality and merely include receiving segmentation data, determining a difference between two segmentation maps, and deciding whether the difference exceeds a threshold. The claim does not specify how this difference is computed, what anatomical abnormalities are being detected, how threshold values are applied, or any other specific steps which would represent an unconventional approach.
Taken individually or in combination, the additional elements of claim 1 do not add meaningful limitations beyond the identified mental process. They do not provide an improvement to the technical field or add any specialized hardware implementing the judicial exception, and thus do not amount to significantly more than the judicial exception. Accordingly, claim 1 does not recite an inventive concept under Step 2B and the claim is not patent eligible under 35 U.S.C. 101.
[Images: media_image10.png, media_image11.png]
On pages 11-13, Applicant argues,
Examiner respectfully disagrees. Gill, pg. 146, Sections 3 and 3.1, teaches generating two distinct lung segmentation results and determining volumes of agreement and disagreement between them. Gill, pgs. 146-147, Sections 3.2 and 3.2.1, further discloses dividing the disagreement volume into various chunks, which are individually classified as belonging or not belonging to the lung. Accordingly, Gill’s disagreement volume and its associated chunks correspond to the “ascertain a difference between two segmentation maps” limitation of claim 1.
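For illustration only, the agreement/disagreement split described above can be sketched in a few lines. This is a minimal sketch, not Gill’s implementation; the function and variable names are illustrative:

```python
# Minimal sketch (illustrative names, not Gill's code): each segmentation is
# represented as the set of voxel indices it labels as lung.
def split_volumes(v_rg, v_osf):
    """Return (V_C, V_D): the volume of agreement and the volume of disagreement."""
    v_c = v_rg & v_osf   # voxels both methods label as lung
    v_d = v_rg ^ v_osf   # voxels labeled lung by exactly one method
    return v_c, v_d

# Toy voxel-index sets standing in for 3-D lung masks.
v_c, v_d = split_volumes({0, 1, 3}, {0, 3, 4})
# v_c -> {0, 3} (agreement); v_d -> {1, 4} (disagreement)
```

A connected-component analysis of the disagreement volume would then yield the chunks that Gill classifies individually.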
During validation, Gill, pgs. 150-151, Section 7.2, describes additional classification criteria applied specifically to the chunks of the disagreement volume. Gill explicitly teaches that the additional rule applies to chunks ω ∈ Ω_C^large, which correspond to the large-volume components of the difference between the two segmentation methods (see equations in Section 3.1). Gill’s additional rule includes HU-range thresholding (-21 to 32 HU) to detect pleural fluid voxels. This allows identification of disagreement chunks which primarily represent pleural effusion. Because this identification only occurs when the disagreement chunk (i.e., the difference between the segmentation maps) satisfies the threshold criteria, Gill reasonably teaches detecting an anatomical abnormality based on whether the difference between the two segmentation maps exceeds a predetermined threshold, as required by claim 1.
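For illustration only, the HU-range thresholding rule characterized above can be sketched as follows; the names and the chunk representation are illustrative assumptions, not code from the reference:

```python
# Minimal sketch (illustrative, not Gill's code): a large disagreement chunk
# indicates pleural effusion when its volume is at least 20% of the
# model-based lung segmentation and at least 50% of its voxels fall in the
# pleural-fluid range of -21 to 32 HU.
def chunk_indicates_effusion(chunk_hu, model_lung_voxels):
    fluid = [hu for hu in chunk_hu if -21 <= hu <= 32]      # range-bound HU threshold
    big_enough = len(chunk_hu) >= 0.20 * model_lung_voxels  # >= 20% of model volume
    mostly_fluid = len(fluid) >= 0.50 * len(chunk_hu)       # >= 50% pleural fluid
    return big_enough and mostly_fluid

# A 4-voxel chunk with 3 values in [-21, 32] HU, against a 10-voxel model
# segmentation: 4 >= 2 and 3 >= 2, so effusion is indicated.
print(chunk_indicates_effusion([0, 10, -30, 25], 10))  # True
```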
[Image: media_image12.png]
On pages 12-13, Applicant argues,
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “a metric quantifying the difference” ) are not recited in the rejected claim. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
As noted above, Gill reasonably teaches detecting an anatomical abnormality based on whether the difference between the two segmentation maps exceeds a predetermined threshold.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9, 11, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more. The following analysis is in accordance with the Subject Matter Eligibility Test (See flowchart in MPEP 2106).
Step 1
Independent claim 1 is directed to one of the four statutory categories of eligible subject matter (Machine); thus, the claim passes Step 1 of the Subject Matter Eligibility Test.
Step 2A: prong 1
The claim recites:
“receive two segmentation maps for an input image, the two segmentation maps obtained by a first segmentation and a second segmentation, the first segmentation implementing a shape-prior-based segmentation algorithm, and the second segmentation implementing a segmentation algorithm that is not based on a shape-prior or the second segmentation accounting for one or more shape priors at a lower weight as compared to the first segmentation” which can be reasonably interpreted as a mental process of a human observer receiving two different segmentation results for observation and evaluation.
“ascertain a difference between the two segmentation maps” which can be reasonably interpreted as a mental process of a human observer observing and comparing segmentation maps to identify differences.
“detect an anatomical abnormality when the difference between the two segmentation maps exceeds a predetermined or user-defined threshold” which can be reasonably interpreted as a mental process of a human observer, such as a physician, detecting anatomical abnormalities when the amount of differences exceeds a set limit.
Step 2A: prong 2 analysis
The judicial exception is not integrated into a practical application because of the following additional elements:
“a memory that stores a plurality of instructions; and a processor that couples to the memory and is configured to execute the plurality of instructions…” which can be reasonably interpreted as merely using a computer as a tool to implement the abstract idea. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application.
“the two segmentation maps obtained by a first segmentation and a second segmentation, the first segmentation implementing a shape-prior-based segmentation algorithm, and the second segmentation implementing a segmentation algorithm that is not based on a shape-prior or the second segmentation accounting for one or more shape priors at a lower weight as compared to the first segmentation” which is recited at a high level of generality such that it amounts to merely describing the data received by the processor. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Step 2B
Finally, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because of the following:
“a memory that stores a plurality of instructions; and a processor that couples to the memory and is configured to execute the plurality of instructions…” which can be reasonably interpreted as well-understood, routine, and conventional computer elements, and thus, does not amount to significantly more than the judicial exception.
“the two segmentation maps obtained by a first segmentation and a second segmentation, the first segmentation implementing a shape-prior-based segmentation algorithm, and the second segmentation implementing a segmentation algorithm that is not based on a shape-prior or the second segmentation accounting for one or more shape priors at a lower weight as compared to the first segmentation” which can be reasonably interpreted as well-understood, routine, and conventional segmentation algorithms in the field, and thus, does not amount to significantly more than the judicial exception.
Conclusion
The additional elements of the claim do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment. Therefore, the analysis in accordance with the Subject Matter Eligibility Test does not result in a conclusion of eligibility.
Independent claims 11 and 16 contain elements found analogous to those of claim 1. Therefore, claims 11 and 16 are similarly rejected under 35 U.S.C. 101.
Dependent claims 2-9 recite additional elements that do not integrate the same abstract idea into a practical application or transform the abstract idea in a meaningful way. The dependent claims are rejected for the following reasons:
Claims 2-4 provide additional elements that do not integrate the judicial exception into a practical application because they are merely insignificant extra-solution activity.
Claims 5, 6, and 9 provide additional elements that do not integrate the judicial exception into a practical application because the elements are well-understood, routine, and conventional in the field.
Claims 7 and 8 recite elements found to be part of the same abstract idea (mental process) identified in the analysis of claim 1.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 7, 9, 11, 15 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gill et al. (“An approach for reducing the error rate in automated lung segmentation”, Computers in biology and medicine 76, 2016), (hereinafter, Gill).
Regarding claim 1, Gill teaches a system for image processing, comprising:
a memory that stores a plurality of instructions; and a processor that couples to the memory and is configured to execute the plurality of instructions to (Gill, “Lung segmentation is one of the first processing steps in computer-aided quantitative lung image analysis.”, pg. 143, 1st column, 1st paragraph, lines 1-2, “In this paper, we propose a segmentation fusion approach based on a classification framework, which selectively combines (components of) two independently generated lung segmentations to form a new segmentation result with no or reduced errors.”, pg. 144, 1st column, 2nd paragraph, lines 1-4, A computer-implemented segmentation is performed to fuse segmentation results. This process includes a processor and memory to execute the detailed segmentation functions.):
receive two segmentation maps for an input image, the two segmentation maps obtained by a first segmentation and a second segmentation, the first segmentation implementing a shape-prior-based segmentation algorithm, and the second segmentation implementing a segmentation algorithm that is not based on a shape-prior or the second segmentation accounting for one or more shape priors at a lower weight as compared to the first segmentation (Gill, “In this paper, we propose a segmentation fusion approach based on a classification framework, which selectively combines (components of) two independently generated lung segmentations to form a new segmentation result with no or reduced errors. The idea behind this approach is to take advantage of the strength of both methods, but without including their errors. In our case, the two segmentations are generated by a region growing and robust active shape model (RASM) based method [6] (Section 2).”, pg. 144, 1st column, 2nd paragraph, lines 1-8, see Fig. 1, (b) Region growing segmentation result and (c) Model-based segmentation result, A dual segmentation approach is proposed to improve the accuracy of lung segmentation. The system applies two distinct segmentation methods: a model-based segmentation using an active shape model and an intensity-based region growing segmentation. This process generates one segmentation map from each method, yielding the two segmentation maps for fusion.);
ascertain a difference between the two segmentation maps (Gill, “Given the results of two lung segmentation algorithms A and B, we assume that if both methods label a voxel as lung tissue, then the likelihood of the voxel representing lung is high. Therefore, it will be labeled as lung by our fusion method. For components of disagreement, a trained classifier is utilized to individually decide
which components should be added to the volume of mutual agreement between both methods, resulting in the final output segmentation of the algorithm. Note that classification is performed on components of disagreement (i.e., volume chunk). Therefore, all voxels of the volume chunk will receive the same label by the classifier.”, pg. 144, 2nd column, 2nd full paragraph, “The segmentations are divided into two volumes: the common segmentation volume V_C = V_RG ∩ V_OSF (all voxels that belong to both segmentation volumes) and the difference segmentation volume V_D (all voxels that are in one of the segmentations but not in the other).”, pg. 146, 1st column, 1st full paragraph, lines 1-7, see Fig. 2, (d) Components of disagreement, Given the two segmentation maps, the system computes a difference volume to identify components of disagreement. These components correspond to various volume chunks where the segmentation results differ.); and
detect an anatomical abnormality when the difference between the two segmentation maps exceeds a predetermined or user-defined threshold (Gill, “The components of disagreement mainly represent trachea/airways and the region of the costophrenic angle, respectively. For cases 3 and 4, model-based results without region-growing results (OSF/RG) are shown. The components of disagreement mainly represent tumor and fat tissue, respectively. Note that, while some of these components belong to the lung, others do not.”, Fig. 2 description, lines 3-5, “A classifier is learned from a set of training CT volumes and corresponding reference lung segmentations in STrain (Section 4.1) to distinguish between chunks belonging to the lung and those not belonging to the lung.”, pg. 146, 2nd column, 3rd full paragraph, lines 1-4, “a feature descriptor f_ω is computed for each chunk ω ∈ Ω_C^large. The descriptor considers different properties that are captured by calculating the following feature volumes on the input CT scan… (iv) Distance from lung boundary: This feature volume is computed to estimate how close the chunk is to the lung boundary. Also, it is used for distinguishing chunks inside the lung, such as tumors, from chunks outside the lung, such as a leak into colon. Since the model segmentation V_RASM has shown to be successful in including lung tumors [6], it is utilized to compute a signed distance transform. To compare distances across CT volumes, they are normalized by the maximum boundary distance found inside the lung.”, pg. 147, 1st column, lines 3-6 and 26-34, “To enable a fair comparison and, at the same time, demonstrate the impact of a well adapted classification system, we processed the LOLA11 test set with two variants of our algorithm: FusionST and FusionPE. FusionST represents the standard algorithm as described in Section 3. For FusionPE, the following classification rule was added to the system to enable it to deal with pleural effusion cases that could not be learned from our training data set STrain. The idea behind this rule is to reject large chunks that mainly consist of pleural fluid and constitute a large area of a lung. Therefore, a chunk ω ∈ Ω_C^large is rejected if its volume is at least 20% of the volume of the model-based lung segmentation and the relative amount of pleural fluid is at or exceeds 50% of the volume of ω. Pleural fluid voxels are identified with a range-bound threshold operation, and the range [-21, 32] HU was selected by combining the ranges for exudate pleural effusions and transudate effusions that were previously reported by Abramowitz et al. [20].”, pg. 151, 1st column, lines 1-17, The volume chunks identified from the difference volume are evaluated to determine whether or not they belong to the lung. To this end, the system computes feature descriptors for each chunk, such as density, texture, and distance from lung boundaries, enabling the classifier to distinguish structures such as tumors or leaks. In addition, classification rules are implemented to identify pleural fluid voxels within the volume chunks based on identifying voxels which fall within a Hounsfield unit thresholding range (e.g., exceeds -21 but is under 32), thereby detecting a specific anatomical abnormality of the lungs when voxels of the chunks exceed a set threshold. These decisions help guide the selective fusion of the two segmentation maps by including only the relevant regions in the final lung segmentation.).
Regarding claim 2, Gill teaches the system of claim 1, wherein an indication of the difference is output (Gill, “As can be seen in the difference image of both segmentations (Fig. 1(d))”, pg. 144, 1st column, 1st full paragraph, lines 9-10, see (d) of Figs. 1 and 2).
Regarding claim 4, Gill teaches the system of claim 2, wherein the indication is coded to represent a magnitude of the difference (Gill, “The following procedure is used to divide the difference volume V_D into the three subsets. First, a morphological opening operation is performed using a spherical form element with a radius of 1 mm to differentiate between chunks and boundary bias differences. Second, a connected component analysis is applied, resulting in a set of chunks Ω_C and a set of surface bias volumes Ω_B. Third, chunks in Ω_C that are smaller than ρ voxels are put into Ω_C^small and the rest into Ω_C^large so that the classification system can process them separately.”, pg. 146, 1st column, 3rd full paragraph, lines 1-9, see (d) of Figs. 1 and 2, A spatial volume is defined for each chunk. These chunks represent a size or magnitude of difference between the segmentation results. This can be seen highlighted in green and outlined in red in (d) of Figs. 1 and 2.).
Regarding claim 7, Gill teaches the system of claim 1, wherein the first and second segmentations in the two segmentation maps represent a bone tissue or a cancerous tissue (Gill, “This feature volume is used to distinguish between chunks belonging to different structures in a CT volume such as air, lung tissue, fat, bones, etc. based on their Hounsfield units (HU)... This feature volume is computed to estimate how close the chunk is to the lung boundary. Also, it is used for distinguishing chunks inside the lung, such as tumors, from chunks outside the lung, such as a leak into colon.”, pg. 147, 1st column, Section (i) and (iv), lines 1-4 and 1-5, The initial segmentation results are used to identify volume chunks where disagreements occur. This disagreement can represent various structures such as bone or tumors. The first and second segmentation “represent” bone or cancerous tissue, in that their disagreement is used to identify these structures.).
Regarding claim 9, Gill teaches the system of claim 1, wherein the input image is at least one of an X-ray image, an emission image, and a magnetic resonance image (Gill, “We assess fusion performance on a diverse set of 204 lung CT scans and provide a comparison to the performance of both input lung segmentations.”, pg. 144, 1st column, 2nd full paragraph, lines 16-18, Note that in accordance with the claim interpretation outlined above, only one item of the listing is required.).
Claim 11 corresponds to claim 1, reciting a computer-implemented image processing method comprising steps corresponding to functions of the system of claim 1. Gill teaches a computer-implemented image processing method (Gill, “Lung segmentation is one of the first processing steps in computer-aided quantitative lung image analysis.”, pg. 143, 1st column, 1st paragraph, lines 1-2, see Fig. 3) comprising steps corresponding to functions of the system of claim 1. As indicated in the analysis of claim 1, Gill teaches all the limitations according to claim 1. Therefore, claim 11 is rejected for the same reasons as claim 1.
Claim 16 corresponds to claim 1, additionally reciting a non-transitory computer-readable medium for storing executable instructions, which cause an image processing method to be performed, the method comprising steps corresponding to functions of the image recognition system of claim 1. Gill teaches a non-transitory computer-readable medium for storing executable instructions (Gill, “Lung segmentation is one of the first processing steps in computer-aided quantitative lung image analysis.”, pg. 143, 1st column, 1st paragraph, lines 1-2, “To capture the relative location of chunks, two feature volumes are used to store the location of each voxel in y- (anterior–posterior axis) and z-direction (superior–inferior axis).”, pg. 147, 1st column, (vi) lines 1-4, The system is a computer-implemented lung segmentation. A non-transitory computer-readable medium is necessary to execute the functions of this process, such as to store the spatial information corresponding to disagreement chunks.), which cause an image processing method to be performed, the method comprising steps corresponding to functions of the image recognition system of claim 1. As indicated in the analysis of claim 1, Gill teaches all the limitations according to claim 1. Therefore, claim 16 is rejected for the same reasons as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Gill et al. (“An approach for reducing the error rate in automated lung segmentation”, Computers in biology and medicine 76, 2016) in view of Feng et al. (US 20130121545 A1), (hereinafter, Feng).
Regarding claim 3, Gill teaches the system of claim 2, wherein the indication and the input image or the first segmentation map, or the second segmentation map are displayed (Gill, see Fig. 1, (a) Coronal slice of the CT scan. (b) Region growing segmentation result. (c) Model-based segmentation result. (d) Difference volume between the segmentations in (b) and (c).).
Gill does not teach the indication and the input image or the first segmentation map or the second segmentation map are displayed on a display device.
However, Feng teaches the indication and the input image or the first segmentation map or the second segmentation map are displayed on a display device (Feng, “Returning to FIG. 2, at step 208, the lung segmentation results are output. The lung segmentation results can be output by displaying the lung segmentation results, for example, on a display device of a computer system.”, pg. 3, paragraph 0028, Lung segmentation results are output for display on a display device.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Gill by incorporating a display device as taught by Feng (Feng, pg. 3, paragraph 0028), in order to display the lung segmentation results (e.g., those depicted in Gill’s Fig. 1). The motivation for doing so would have been to output the various segmentation results for visual review, thereby enabling assessment of segmentation accuracy. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Gill with Feng to obtain the invention as specified in claim 3.
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Gill et al. (“An approach for reducing the error rate in automated lung segmentation”, Computers in biology and medicine 76, 2016) in view of Hu et al. (“An effective approach for CT lung segmentation using mask region-based convolutional neural networks”, Artificial Intelligence in Medicine 103, 2020), (hereinafter, Hu).
Regarding claim 5, Gill teaches the system of claim 1. Gill does not teach wherein the second segmentation is based on a machine learning model.
However, Hu teaches wherein the second segmentation is based on a machine learning model (Hu, “Our study proposes a new lung segmentation approach applied to CT images using the Convolutional Neural Network (CNN) Mask R-CNN combined with ANN with emphasis on supervised and non-supervised models.”, pg. 2, 1st column, 1st full paragraph, “Fig. 2(A) presents the training of a model based on the Mask R-CNN network designed for mapping lung regions in CT images. The training set acts as the ground truth since it only uses images already segmented by a specialist… Fig. 2(B) shows the result of the lung mapping of a new lung image using the knowledge stored by the Mask R-CNN Lung model in the previous step. Eq. (8) explains the lung map point. LungMap(x, y) returns 1 when Mask R-CNN finds a lung region, and 0 otherwise (background region).”, pg. 4, 2nd column, 4th and 5th full paragraphs, See figs. 1 and 2, The method employs a Mask R-CNN trained to identify lung region from input image data. The model relies on supervised learning from already segmented ground truth lung images.).
Gill teaches fusing a region-growing and a model-based segmentation to assess components of disagreement (Gill, “Fig. 2 provides several examples for a region growing and model-based lung segmentation approach that will be utilized in this paper. Differences in generated lung masks result in local volume components of disagreement (Fig. 2d), which can have many causes.”, pg. 144, 2nd column, 1st full paragraph, lines 3-7). Gill further teaches that the region-growing segmentation is selected because it offers a simple, density-based lung segmentation (Gill, “Basically, methods developed can be grouped into three categories given below: (a) Simple, low complexity methods like region growing [2,3], which are based on simple assumptions (e.g., density range of lung tissue). These methods typically work well for normal lungs, but may fail in the case of diseased lungs or imaging artefacts.”, pg. 143, 1st column, 2nd paragraph, lines 1-4 and 2nd column, lines 1-3). Hu teaches a machine learning-based segmentation approach using Mask R-CNN for lung CT segmentation, which provides a similarly simple binary segmentation output (Hu, see Fig. 1, output mask) and does not rely on predefined shape priors. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Gill by replacing the region-growing segmentation with the Mask R-CNN lung segmentation of Hu (Hu, pg. 4, 2nd column, 4th and 5th full paragraphs, see Figs. 1 and 2). The motivation for doing so would have been to improve the robustness and generalization of the segmentation while preserving its simplicity and non-shape-prior characteristics, thereby improving the final segmentation fusion result. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine the teachings of Gill with Hu to obtain the invention as specified in claim 5.
Regarding claim 6, Gill in view of Hu teaches the system of claim 5, wherein the machine learning model is based on an artificial neural network (Hu, “Our study proposes a new lung segmentation approach applied to CT images using the Convolutional Neural Network (CNN) Mask R-CNN combined with ANN with emphasis on supervised and non-supervised models.”, pg. 2, 1st column, 1st full paragraph).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gill et al. (“An approach for reducing the error rate in automated lung segmentation”, Computers in biology and medicine 76, 2016) in view of Ruikar et al. (“Segmentation and Analysis of CT Images for Bone Fracture Detection and Labeling”, Medical Imaging. CRC Press, 2019), (hereinafter, Ruikar).
Regarding claim 8, Gill teaches the system of claim 7. Gill does not teach wherein the anatomical abnormality is a bone fracture, and wherein the difference is indicative of the bone fracture.
However, Ruikar teaches wherein the anatomical abnormality is a bone fracture, and wherein the difference is indicative of the bone fracture (Ruikar, “In the proposed work, we have developed a CAD system for bone fracture detection and analysis. The proposed system works in stages: unwanted artifacts removal, bone region extraction, and unique label assignment. In the first step, the acquiesced CT images are preprocessed to remove unwanted artifacts and to enhance bone regions. A histogram modeling and point processing-based image enhancement technique are devised to erase the flesh surrounded by bone tissue, and to enhance bone tissue regions. A 2D region growing-based segmentation method is adopted to extract bone tissue regions from the preprocessed image. The seed points are selected automatically. In the last stage, the hierarchical labeling scheme is used to assign the unique labels to each fractured piece by considering patient-specific bone anatomy.”, pg. 132, 3rd full paragraph, lines 1-12, “In order to test the performance of the proposed method, this is used to segment and label both healthy and fractured bones from the real patient-specific CT stack… Figure 7.10 (a) shows a healthy patella and femur. Figure 7.10 (b) shows the resultant image. The labels represented in the current CT stack have two healthy bones without fractures. Figure 7.10 (c) shows a CT with a fracture in the tibia and fibula. Figure 7.10 (d) shows the result. The labels indicate that the image has two individual bones and each has two fractured pieces.”, pg. 147, 1st paragraph, see fig. 7.10 (d), Bone CT images are segmented using a region growing approach, and individual bones and their fractured pieces are uniquely labeled based on patient-specific anatomy, thereby enabling detection and analysis of bone fractures.).
Gill teaches a fusion-based segmentation approach for lung CT images that combines results of a simple density-based region growing segmentation and a more complex “structurally aware” model-based segmentation (Gill, “In this paper, we propose a segmentation fusion approach based on a classification framework, which selectively combines (components of) two independently generated lung segmentations to form a new segmentation result with no or reduced errors.”, pg. 144, 1st column, 2nd full paragraph, lines 1-4). This allows for the analysis of disagreements between the two segmentations (e.g., volume chunks), which are processed to identify various structures to be selectively accepted or rejected to reduce segmentation error, particularly for diseased lungs (Gill, “(a) Simple, low complexity methods like region growing [2,3], which are based on simple assumptions (e.g., density range of lung tissue). These methods typically work well for normal lungs, but may fail in the case of diseased lungs or imaging artefacts. An advantage of such methods is the low computational complexity. (b) Advanced, more robust algorithms that try to overcome the problems of category (a) and typically show higher computational complexity.”, pg. 143, 1st column, 2nd paragraph, lines 5-6, and 2nd column, lines 1-7, “This feature volume is computed to estimate how close the chunk is to the lung boundary. Also, it is used for distinguishing chunks inside the lung, such as tumors, from chunks outside the lung, such as a leak into colon. Since the model segmentation VRASM has shown to be successful in including lung tumors [6], it is utilized to compute a signed distance transform.”, pg. 147, 1st column, Section (iv), lines 1-7). Ruikar teaches a bone CT image segmentation approach which enables bone fracture detection and unique labeling (see above).
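The chunk analysis Gill describes (using the model-based segmentation to judge whether a disagreement chunk lies inside or outside the lung) can be sketched with simple per-chunk features. The feature names below, and the use of an inside-fraction as a crude stand-in for Gill's signed distance transform, are assumptions for illustration only.

```python
import numpy as np

def chunk_features(labels, n, model_mask):
    """Per-chunk features in the spirit of Gill's classification
    framework: voxel count ("volume") and the fraction of the chunk
    covered by the model-based segmentation, a crude proxy for the
    signed-distance feature used to separate chunks inside the lung
    (e.g., tumors) from chunks outside it (e.g., leaks)."""
    model = model_mask.astype(bool)
    feats = []
    for k in range(1, n + 1):
        chunk = labels == k
        volume = int(chunk.sum())
        inside = float((chunk & model).sum()) / volume
        feats.append({"volume": volume, "inside_fraction": inside})
    return feats

# Two pre-labeled disagreement chunks; the model mask covers chunk 1 only.
labels = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 2, 2]])
model_mask = np.array([[1, 1, 1],
                       [1, 1, 1],
                       [0, 0, 0]])
feats = chunk_features(labels, 2, model_mask)
```

Under this sketch, a chunk fully inside the model segmentation (inside_fraction near 1) would tend to be accepted, mirroring Gill's use of the model segmentation's success at including lung tumors.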
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the lung segmentation method of Gill by applying its segmentation fusion approach to bone CT images as taught by Ruikar (Ruikar, pg. 132, 3rd full paragraph, lines 1-12, pg. 147, 1st paragraph, see fig. 7.10 (d)). The motivation for doing so would have been to improve the robustness and accuracy of bone segmentation for fracture detection, by exploiting complementary segmentations and selectively analyzing their disagreements (as suggested by Gill, “We have presented a fusion approach to increase the robustness of automated lung segmentation by selectively combining the output of a region growing and a model-based lung segmentation method.", pg. 151, 2nd column, 6th full paragraph, line 1 and pg. 152, 1st column, lines 1-3). Additionally, Gill suggests extending the method to other application domains (Gill, “The increased robustness make the fusion approach an attractive selection for applications requiring high volume processing like multi-site clinical trials. In addition, the algorithm can be generalized to other application domains.”, pg. 152, 2nd column, line 4 and pg. 153, 1st column, lines 1-4).
The combination of Gill in view of Ruikar would tailor Gill’s fusion-based segmentation to the bone domain and exploit the dual segmentation method to identify disagreement structures. In particular, the disease aware segmentation of the lungs would be modified to identify bone related diseases and/or abnormalities, such as the bone fractures taught by Ruikar. See link included in footnote below for an example bone CT image that illustrates the type of bone fracture structures that could be detected using the proposed combination of Gill in view of Ruikar1. Thus, Gill in view of Ruikar teaches wherein the anatomical abnormality is a bone fracture, and wherein the difference is indicative of the bone fracture. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Gill with Ruikar to obtain the invention as specified in claim 8.
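A toy decision rule of the kind the rejection contemplates (flagging an abnormality such as a fracture when segmentation disagreements are numerous or large) could be sketched as follows. The function name and both thresholds are purely hypothetical placeholders, not taught by Gill or Ruikar.

```python
def flag_abnormality(chunk_volumes, count_limit=3, volume_limit=50):
    """Flag a possible anatomical abnormality when the disagreement
    chunks between two segmentations are numerous, or when any single
    chunk is large. Both limits are illustrative placeholders."""
    if len(chunk_volumes) >= count_limit:
        return True
    return any(v >= volume_limit for v in chunk_volumes)

# A single large disagreement chunk is enough to raise a flag.
flagged = flag_abnormality([12, 64])
```

This mirrors the simple mental rule discussed in the Response to Arguments above: deciding that a number (or size) of differences exceeds a limit.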
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN whose telephone number is (703)756-5533. The examiner can normally be reached Monday-Friday 9:00-5:00 (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center.