DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1 and 12 have been amended.
Claims 3 and 14 have been cancelled.
Claims 1-2, 4-13, and 15-20 remain pending and are considered herein.
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2, 8, 10-13, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 20220092786 A1) in view of Gooding et al. (US 20230100255 A1).
Regarding claim 1, Xu et al. teaches an apparatus, comprising: at least one processor (see Fig. 8, disclosing an apparatus; see also para [0038]; “software modules running on a processor of a computing system or a medical imaging system”) configured to: provide a visual representation of a medical image (see para [0016]; “The present invention is described herein to give a visual understanding of methods for localization in medical images. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations typically accomplished in the memory or other circuitry/hardware of a computer system”), wherein the medical image includes a tubular structure associated with a human body (see para [0050]; “medical image data can be a 3D medical image of at least a part of the tubular structure”). Xu et al. provides the tubular network and localized organ segments that are to be annotated, but does not specifically teach: obtain, based on one or more user inputs, a marking of the tubular structure in the medical image, wherein the marking includes one or more lines drawn through or around the tubular structure; and generate, based on the marking of the tubular structure and a pre-trained machine-learning (ML) image annotation model, an annotation of the tubular structure.
In the same field of endeavor, Gooding et al. teaches obtain, based on one or more user inputs, a marking of the tubular structure in the medical image, wherein the marking includes one or more lines drawn through or around the tubular structure (see para [0003]; “The segmentation of healthy organs and cancerous regions on an image is known in the clinic as “contouring”, as a contour is drawn around each structure on the image, generally image slice by image slice. As used herein, a contour is defined as the outline of a structure of interest, such as an organ or a tumour”, see also para [0013]; “a user may interact with a single image slice by contouring a structure in that image slice”, see also para [0011]; “For example, a tumour node may be indistinguishable from tubular structures such as arteries, when looking at a single image slice”, and para [0061]; “the data representing an input contour is a user-generated contour…. the data representing an input contour is obtained by one or more of manual contouring, auto contouring, or user interactive contouring”, Note: the user contour (marking) can reasonably be a line drawn around a tubular structure); and generate, based on the marking of the tubular structure and a pre-trained machine-learning (ML) image annotation model, an annotation of the tubular structure (see para [00412]; “predicting target contour data for the selected target image slice that identifies at least one of the same one or more structures of interest within the target image slice, based on one or more of the received input 2D image slices and the data representing an input contours…. Preferably, the target contour prediction is done using a machine learning model”, Note: the “annotation of the tubular structure” reads on the predicted contour/label of the structure of interest (the tubular structure); the model takes the input contour and the image slices and predicts contour annotations for the same structure on other slices, i.e., a pre-trained ML model).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, in order to provide quality contours and speed for successful radiotherapy planning (see Gooding et al. para [0003]).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the annotation includes a segmentation mask associated with the tubular structure (see Gooding et al. para [0026]; “proposed a ML model that takes user interaction at one image slice to estimate the segmentation mask at nearby image slices”).
Regarding claim 8, the rejection of claim 1 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the at least one processor is further configured to provide one or more annotation tools to a user of the apparatus, and wherein the one or more user inputs are received as a result of the user using the one or more annotation tools (see Gooding et al. para [0090]; “The medical image contouring system described herein, provides the methods and tools for contouring a 3D medical image, composed of a stack of 2D image slices. In an example of the invention the stack may include all sequential images, or a range of one or more images selected from a sequence”).
Regarding claim 10, the rejection of claim 1 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the tubular structure includes a blood vessel of the human body or a medical device inserted or implemented into the human body (see Xu et al. para [0042]; “Accordingly, the tubular structures are preferably blood vessels and/or bronchi”).
Regarding claim 11, the rejection of claim 1 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the at least one processor is further configured to store or export the annotation of the tubular structure (see Xu et al. para [0116]; “In a step OUT, the segmented and localized lung segments S1, S2, S3, . . . are output as image data, i.e. they are e.g. stored, transmitted or displayed…. Optionally also the traced tubular network CL and/or the specific tertiary bronchi TB1, TB2, TB3, . . . can preferably be output as an overlay with adjustable transparency and additionally or alternatively as separate images”).
Regarding claim 12, claim 12 recites limitations of substantially the same scope as claim 1; accordingly, the rejection analysis of claim 1 applies equally to claim 12.
Regarding claim 13, the rejection of claim 12 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the annotation includes a segmentation mask associated with the tubular structure (see Gooding et al. para [0026]; “proposed a ML model that takes user interaction at one image slice to estimate the segmentation mask at nearby image slices”).
Regarding claim 17, the rejection of claim 12 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the ML image annotation model is learned from a training dataset that comprises marked images of the tubular structure paired with ground truth annotations of the tubular structure (see Gooding et al. para [0003]; “A Ground Truth (GT) is a contour provided by a clinical expert, used for reference, as well as for model training, testing and evaluation”, see also para [0006]; “ML-based approaches learn from training sets (image +contours) of previously contoured patients in order to infer the shape and location of contours in new unseen images”).
Regarding claim 19, the rejection of claim 12 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches providing one or more annotation tools to a user, wherein the one or more user inputs are received as a result of the user using the one or more annotation tools (see Gooding et al. para [0090]; “The medical image contouring system described herein, provides the methods and tools for contouring a 3D medical image, composed of a stack of 2D image slices. In an example of the invention the stack may include all sequential images, or a range of one or more images selected from a sequence”).
Regarding claim 20, the rejection of claim 12 is incorporated herein.
The combination of Xu et al. and Gooding et al. further teaches wherein the tubular structure includes a blood vessel of the human body or a medical device inserted or implemented into the human body (see Xu et al. para [0042]; “Accordingly, the tubular structures are preferably blood vessels and/or bronchi”).
Claims 4-5, 9 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. in view of Gooding et al. as applied to claims 1 and 12 above, and further in view of Wang et al. (US 20230058183 A1).
Regarding claim 4, the rejection of claim 1 is incorporated herein. The combination of Xu et al. and Gooding et al. as a whole does not teach wherein the at least one processor being configured to obtain the marking of the tubular structure comprises the at least one processor being configured to: generate, automatically, a preliminary marking of the tubular structure; present the preliminary marking to a user of the apparatus; and obtain the marking of the tubular structure based on the one or more user inputs that modify the automatically generated preliminary marking of the tubular structure.
In the same field of endeavor, Wang et al. teaches wherein the at least one processor being configured to obtain the marking of the tubular structure comprises the at least one processor being configured to: generate, automatically, a preliminary marking of the tubular structure (see para [0026]; “an input step of receiving an input of three-dimensional volume data rendering a tubular organ and a preliminary annotation result obtained by annotating, in advance, the tubular organ in the three-dimensional volume data”, see also para [0062]; “For example, the process of generating the complete blood vessel region from the fitted blood vessel centerline may be realized, by implementing a conventional blood vessel segmentation method based on image intensities or geometrical characteristics or by implementing a segmentation method based on Deep Learning (DL)”); present the preliminary marking to a user of the apparatus (see Fig. 10, disclosing presenting the preliminary marking); and obtain the marking of the tubular structure based on the one or more user inputs that modify the automatically generated preliminary marking of the tubular structure (see para [0092]; “The gray blood vessel pattern is a missing part of the tubular organ (the blood vessel) in the preliminary annotation result. Accordingly, as illustrated in FIG. 10(b), at step S300′, annotation is implemented as described at step S400 in the first embodiment on the missing part ml. Subsequently, as illustrated in FIG. 10(c), the inverse mapping and the fitting process, or the like as described at step S500 in the first embodiment are performed on the annotated missing part, by using the mapping matrix”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, and the apparatus of Wang et al. for annotating and identifying a tubular organ in three-dimensional volume data by inverse mapping, in order to make it easier for a user to immediately recognize which blood vessel branch is missing or interrupted (see Wang et al. para [0026]).
Regarding claim 5, the rejection of claim 4 is incorporated herein.
The combination of Xu et al., Gooding et al. and Wang et al. further teaches wherein the preliminary marking of the tubular structure is generated based on an ML image segmentation model (see Wang et al. para [0062]; “For example, the process of generating the complete blood vessel region from the fitted blood vessel centerline may be realized, by implementing a conventional blood vessel segmentation method based on image intensities or geometrical characteristics or by implementing a segmentation method based on Deep Learning (DL)”).
Regarding claim 9, the rejection of claim 8 is incorporated herein.
The combination of Xu et al., Gooding et al. and Wang et al. further teaches wherein at least one of the one or more annotation tools has a pixel-level accuracy (see Wang et al. para [0055]; “After the key points on the centerline of the blood vessel were annotated in the LMIP image at step S400, the key points are inversely mapped, at step S500, onto the original three-dimensional volume data, while using the mapping matrix obtained at step S300, with respect to the pixel positions of the annotated key points in the LMIP image”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, and the apparatus of Wang et al. for annotating and identifying a tubular organ in three-dimensional volume data by inverse mapping, in order to provide edge orientations and improve model performance (see Wang et al. para [0055]).
Regarding claim 15, the rejection of claim 12 is incorporated herein.
The combination of Xu et al., Gooding et al. and Wang et al. further teaches wherein obtaining the marking of the tubular structure comprises: generating, automatically, a preliminary marking of the tubular structure (see Wang et al. para [0026]; “an input step of receiving an input of three-dimensional volume data rendering a tubular organ and a preliminary annotation result obtained by annotating, in advance, the tubular organ in the three-dimensional volume data”, see also para [0062]; “For example, the process of generating the complete blood vessel region from the fitted blood vessel centerline may be realized, by implementing a conventional blood vessel segmentation method based on image intensities or geometrical characteristics or by implementing a segmentation method based on Deep Learning (DL)”); presenting the preliminary marking to a user (see Wang et al. Fig. 10, disclosing presenting the preliminary marking); and obtaining the marking of the tubular structure based on the one or more user inputs that modify the automatically generated preliminary marking of the tubular structure (see Wang et al. para [0092]; “The gray blood vessel pattern is a missing part of the tubular organ (the blood vessel) in the preliminary annotation result. Accordingly, as illustrated in FIG. 10(b), at step S300′, annotation is implemented as described at step S400 in the first embodiment on the missing part ml. Subsequently, as illustrated in FIG. 10(c), the inverse mapping and the fitting process, or the like as described at step S500 in the first embodiment are performed on the annotated missing part, by using the mapping matrix”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, and the apparatus of Wang et al. for annotating and identifying a tubular organ in three-dimensional volume data by inverse mapping, in order to make it easier for a user to immediately recognize which blood vessel branch is missing or interrupted (see Wang et al. para [0092]).
Regarding claim 16, the rejection of claim 15 is incorporated herein.
The combination of Xu et al., Gooding et al. and Wang et al. further teaches wherein the preliminary marking of the tubular structure is generated based on an ML image segmentation model (see Wang et al. para [0062]; “For example, the process of generating the complete blood vessel region from the fitted blood vessel centerline may be realized, by implementing a conventional blood vessel segmentation method based on image intensities or geometrical characteristics or by implementing a segmentation method based on Deep Learning (DL)”).
Claims 6-7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. in view of Gooding et al. as applied to claims 1, 12, and 17 above, and further in view of Zhou et al. (US 20210110135 A1).
Regarding claim 6, the rejection of claim 1 is incorporated herein. The combination of Xu et al. and Gooding et al. as a whole does not teach wherein the ML image annotation model is learned from a training dataset that comprises marked images of the tubular structure paired with ground truth annotations of the tubular structure.
In the same field of endeavor, Zhou et al. teaches wherein the ML image annotation model is learned from a training dataset that comprises marked images of the tubular structure paired with ground truth annotations of the tubular structure (see para [0086]; “At step 902, training images are obtained. For landmark detection, the training images are medical images with known annotated ground truth landmark locations. For anatomical object segmentation, the training images are medical image with known annotated boundaries of the target anatomical object. The training images may be obtained by loading existing annotated training images from a database”, see also para [0045]; “For example, algorithms designed for segmenting tubular structures generally perform well in arteries and veins”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, and the system of Zhou et al. for artificial-intelligence-based medical image segmentation, in order to improve segmentation accuracy and robustness (see Zhou et al. para [0044]).
Regarding claim 7, the rejection of claim 6 is incorporated herein.
The combination of Xu et al., Gooding et al. and Zhou et al. further teaches wherein the ML image annotation model is learned using an artificial neural network (ANN) (see Xu et al. para [0025]; “A machine learning model can be any suitable artificial intelligence model and/or algorithm that can map a set (e.g., one or more) of input variables to a set (e.g., one or more) of output variables”).
The combination of Xu et al., Gooding et al. and Zhou et al. further teaches wherein, during training of the ANN, the ANN is configured to predict a segmentation mask for the tubular structure based on a marked training image of the tubular structure (see Zhou et al. Fig. 12, para [0045]; “Typically, medical image segmentation algorithms are designed and optimized with a specific context of use. For example, algorithms designed for segmenting tubular structures generally perform well in arteries and veins”, see also para [0057]; “Referring to FIG. 3, at step 302 training images and corresponding ground truth segmentations (segmentation masks) are obtained or generated. Medical images with already existing ground truth segmentations may be obtained by loading the medical images and ground truth segmentations from a database. Medical images without ground truth segmentations can be manually annotated to generate ground truth segmentations”) and adjust parameters of the ANN based on a difference between the predicted segmentation mask and a corresponding ground truth segmentation mask (see Zhou et al. para [0060]; “As shown in FIG. 4, Network 1 is a deep neural network trained on the segmentation masks and Network 2 is a deep neural network trained on the distance maps. Network 1 inputs a medical image and estimates a segmentation mask, and the loss function for Network 1 (Loss1) is an error between the estimated segmentation masks and the ground truth segmentation masks over the set of training samples”, see also para [0073]; “the trained DNN predicts action-values corresponding to adjustments to each of the parameters based on the learned policy”).
Regarding claim 18, the rejection of claim 17 is incorporated herein.
The combination of Xu et al., Gooding et al. and Zhou et al. further teaches wherein the ML image annotation model is learned using an artificial neural network (ANN) (see Xu et al. para [0025]; “A machine learning model can be any suitable artificial intelligence model and/or algorithm that can map a set (e.g., one or more) of input variables to a set (e.g., one or more) of output variables”).
The combination of Xu et al., Gooding et al. and Zhou et al. further teaches wherein, during training of the ANN, the ANN is configured to predict a segmentation mask for the tubular structure based on a marked training image of the tubular structure (see Zhou et al. Fig. 12, para [0045]; “Typically, medical image segmentation algorithms are designed and optimized with a specific context of use. For example, algorithms designed for segmenting tubular structures generally perform well in arteries and veins”, see also para [0057]; “Referring to FIG. 3, at step 302 training images and corresponding ground truth segmentations (segmentation masks) are obtained or generated. Medical images with already existing ground truth segmentations may be obtained by loading the medical images and ground truth segmentations from a database. Medical images without ground truth segmentations can be manually annotated to generate ground truth segmentations”) and adjust parameters of the ANN based on a difference between the predicted segmentation mask and a corresponding ground truth segmentation mask (see Zhou et al. para [0060]; “As shown in FIG. 4, Network 1 is a deep neural network trained on the segmentation masks and Network 2 is a deep neural network trained on the distance maps. Network 1 inputs a medical image and estimates a segmentation mask, and the loss function for Network 1 (Loss1) is an error between the estimated segmentation masks and the ground truth segmentation masks over the set of training samples”, see also para [0073]; “the trained DNN predicts action-values corresponding to adjustments to each of the parameters based on the learned policy”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general method of Xu et al. for automatically localizing organ segments in a three-dimensional image and analyzing a tubular network as image data, in view of the imaging system of Gooding et al., which provides tools for an interactive application for contouring medical images, and the system of Zhou et al. for artificial-intelligence-based medical image segmentation, in order to provide priors for edge orientations (see Zhou et al. para [0045]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677