Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s Response
In the Applicant’s Response dated 12/23/25, the Applicant amended Claims 36, 40, and 47-51, canceled Claim 37, and argued against the rejections set forth in the Office Action dated 4/22/25. Claims 36, 38, and 40-54 are pending examination.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/23/26 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 36, 38 and 40-54 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite “receiving at least one input 2D image slice and at least one set of data representing an input contour identifying structures of interest”, “receiving at least one selected target image slice” and “predicting target contour data for the selected target image slice that identifies at least one of the same one or more structures of interest within the target image slice”. These limitations fall within the “mental processes” grouping of abstract ideas. “Predicting” is considered a concept performed in the human mind (including an observation, evaluation, judgment, or opinion).
This judicial exception is not integrated into a practical application because the claim recites two additional elements, the “receiving” steps. These steps are considered “data gathering”, which is a form of insignificant extra-solution activity. These limitations amount to no more than mere instructions to apply the exception using generic computer components.
The limitation reciting a “contour prediction engine implemented as a machine learning model” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The judicial exception of “predicting, by a contour prediction engine implemented as a machine learning model, target contour data for the selected target image slice” is performed “by a contour prediction engine implemented as a machine learning model”. The contour prediction engine implemented as a machine learning model is used to generally apply the abstract idea without placing any limits on how the machine learning model functions. Rather, these limitations only recite the outcome of “predicting target contour data” and do not include any details about how the “predicting” is accomplished. See MPEP 2106.05(f). The recitation of “a contour prediction engine implemented as a machine learning model” merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “a contour prediction engine implemented as a machine learning model” limits the identified judicial exception of “predicting, by a contour prediction engine implemented as a machine learning model, target contour data for the selected target image slice”, this type of limitation merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. The Claims are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 36, 38 and 40-54 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al., United States Patent Publication 2018/0061058 A1 (hereinafter “Xu”), in view of Sivaramakrishna et al., United States Patent Publication 2004/0101184 (hereinafter “Sivaramakrishna”), and further in view of Yang (hereinafter “Yang”).
Claim 36:
Xu discloses:
A method of contouring a three-dimensional (3D) image, comprising:
receiving at least one input 2D image slice, from a set of two-dimensional (2D) image slices constituting the 3D image, and at least one set of data representing an input contour identifying one or more structures of interest in the 3D image within the at least one input 2D image slice (see paragraphs [0049]-[0051]). Xu teaches receiving at least one 2D slice and data representing the structures in the images, such as the shape, size, and/or type of anatomical structure in one image;
wherein the one or more structures of interest comprise defined anatomical structures and previously unidentified structures (see paragraphs [0049]-[0051]). Xu teaches the structure was previously unknown, possibly due to mislabeling or noise;
receiving at least one selected target image slice, from the set of the 2D image slices (see paragraphs [0052] and [0078]). Xu teaches receiving a target image slice by selecting slices of the image in a user interface; and
predicting, by a contour prediction engine implemented as a machine learning model, target contour data for the selected target image slice that identifies at least one of the same one or more structures of interest within the target image slice, based on a spatial relationship between one or more of the received input 2D image slices and the data representing the input contours (see paragraphs [0025], [0064] and [0081]). Xu teaches predicting the anatomical structure that each voxel of a 3D image represents based on the spatial relationship. When the image segmentation is completed, the segmentation unit may output a 3D label map associating each voxel of the 3D image with an anatomical structure. The 3D label map may be displayed in the user interface and/or stored in a medical image database for further use in treatment planning.
Xu fails to expressly disclose the structures of interest being previously unidentified.
Sivaramakrishna discloses:
wherein the one or more structures of interest comprise defined anatomical structures and previously unidentified structures, and wherein the previously unidentified structures are obtained through input contours that identify structures not represented in pre-existing training data (see paragraph [0008]). Sivaramakrishna teaches the structure is unknown when the object being contoured is not uniform in appearance; when the object is not uniform, it is not represented in pre-existing training data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xu to include previously unidentified structures not represented in pre-existing training data, for the purpose of sufficiently identifying unknown structures, as taught by Sivaramakrishna.
Xu and Sivaramakrishna fail to disclose the learning model using contextual features to predict the contours.
Yang discloses:
wherein the predicting comprises extracting and aggregating contextual information at least from the data representing the input contour and applying the aggregated contextual information to generate the target contour data for the selected target image slice (see paragraphs [0019], [0025], and [0026]). Yang teaches predicting by extracting and aggregating contextual information related to the image, which is input into a learning model to determine the target contour.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xu and Sivaramakrishna to include using a learning model to predict the target contour and to use contextual data, for the purpose of accurately performing computer-based automatic liver segmentation in medical images, as taught by Yang.
EXAMINER’S NOTE: This claim can be interpreted as an abstract idea. The claim involves receiving images, receiving a selection, and making a prediction. Although the prediction is made using a machine learning model, the claim can otherwise be performed mentally. The Examiner suggests amending the claim to overcome the abstract idea.
Claim 38:
Xu discloses:
where the machine learning model is one or more of a neural network or random forest (see paragraphs [0025] and [0026]). Xu teaches the machine learning model is a convolutional neural network.
Claim 40:
Xu discloses:
wherein at least one of: the target image slice, together with the input image slice, and the input contour, provides contextual information to identify a relevant location for a contour on the target image slice (see paragraph [0050]). Xu teaches the input image slice has information to identify a relevant location for a region of interest.
Claim 41:
Xu discloses:
wherein the contextual information is provided from a plurality of sources comprised of at least one input image slices, at least one input contour, and a target image (see paragraph [0050]). Xu teaches the input image slice provides contextual information.
Claim 42:
Xu discloses:
wherein the contextual information comprises one or more of information on image features and/or contour features, or spatial relations between image data and/or contour data (see paragraphs [0049]-[0051]). Xu teaches the contextual information includes spatial relations between slices.
Claim 43:
Xu discloses:
wherein the contextual information on spatial relations between image data and/or contour data is learnt from a training data set (see paragraph [0055]). Xu teaches the spatial relations are learned by the training data used to train the CNN.
Claim 44:
Xu discloses:
wherein the contextual information is information relating to one or more features shared between image slices in the set of 2D image slices (see paragraph [0049]). Xu teaches the contextual information relates to features shared between adjacent images.
Claim 45:
Xu discloses:
wherein the image slices in the set of 2D image slices are consecutive image slices (see paragraph [0048]). Xu teaches wherein the image slices are consecutive and adjacent.
Claim 46:
Xu discloses:
wherein the image is a medical image and the modality of the 3D image is one of: CT, MRI, Ultrasound, CBCT (see paragraph [0043]). Xu teaches a CT, CBCT or Ultrasound 3D image.
Claim 47:
Xu discloses:
where the machine learning model for predicting target contour data has been trained using an image dataset that includes a plurality of images each with one or more structures of interest shown on the images in the image dataset (see paragraphs [0024] and [0025]). Xu teaches the ML model may be any algorithm that can learn a model or a pattern based on existing information or knowledge, and predict or estimate output using input of new information or knowledge.
Claim 48:
Xu discloses:
where the training of the machine learning model is performed on a plurality of different imaging modalities (see paragraph [0038]). Xu teaches the image segmentation systems, methods, devices, and processes can be applied to segmenting 3D images obtained from any type of imaging modality.
Claim 49:
Xu discloses:
further comprising the step of updating the machine learning model based on user edits to the structures on one or more target image slices (see paragraph [0076]). Xu teaches the user interface may be used for selecting sets of training images, adjusting one or more parameters of the training process (e.g., the number of adjacent image slices in each stack), selecting or modifying a framework of a CNN model, and/or manually or semi-automatically segmenting an image for training.
Claim 50:
Xu discloses:
where contours for adjacent slices from the set of two-dimensional (2D) image slices are sequentially predicted (see paragraph [0064]). Xu teaches the contours are sequentially predicted by predicting the anatomical structure represented by each slice of all adjacent slices until all are predicted.
Claim 51:
Xu discloses:
wherein a first structure of interest is selected for a first 2D image slice and contours for the first structure are predicted for a first 2D image slice, and the predicted contours for the first 2D image slice are used for contouring the same structure of interest for one or more subsequent 2D image slices from the set of 2D image slices (see paragraphs [0049]-[0051]). Xu teaches using adjacent slice contours to predict the contours of the adjacent slices.
Claim 52:
Xu discloses:
wherein the predicted contours are propagated through sequential image slices using direct propagation of the predicted contours (see paragraph [0111]). Xu teaches the label map is applied to all of the sequential image slices.
Claim 53:
Xu discloses:
wherein the predicted contours are propagated through sequential image slices by iterative propagation, with predicted contours for each subsequent image propagated based on iteration of the contours for the immediately preceding image slice (see paragraphs [0049] and [0064]). Xu teaches the shape, size, and/or type of an anatomical structure in one image may provide information of the shape, size, and/or type of the anatomical structure in another adjacent image along the same plane. The classification is performed on all images until all images are classified based on the contours.
Claim 54:
Xu discloses:
wherein the data representing an input contour is either a user-generated contour, or obtained by one or more of manual contouring, auto-contouring, or user-interactive contouring (see paragraphs [0046] and [0047]). Xu teaches wherein the contour is auto-contoured by the feature extraction portion of the model.
Response to Arguments
Applicant's arguments filed 12/23/25 with regard to the rejection under 35 USC 101 have been fully considered but are not persuasive.
Applicant argues that the amendment clarifies that the recited "predicting" is performed by a contour prediction engine implemented as a machine learning model that computationally extracts and aggregates contextual information from multiple sources, including the target image slice, one or more input image slices, and associated input contours, and applies learned spatial relations between image data and contour data to generate the target contour. Applicant asserts the amendment is fully supported by previously presented Claims 37 and 40 as well as the Specification, which repeatedly describes that contour prediction is carried out by a machine learning based contour prediction engine that learns and applies contextual information, including salient image features and spatial relations, across image slices rather than through human visual or mental judgment (see, e.g., paragraphs [0013], [0014], and [0041]-[0045], and FIGS. 3-6).
The Examiner disagrees.
See the amended 35 USC 101 rejection. It further explains that the limitation “predicting, by a contour prediction engine implemented as a machine learning model, target contour data for the selected target image slice” does not overcome the 35 USC 101 rejection because this limitation only recites the output of a machine learning model and does not detail how the machine learning model achieves that outcome or specifically limit it. Therefore, the rejection is maintained.
Applicant’s arguments, see REM, filed 12/23/25, with respect to the rejection of Claims 36, 38 and 40-54 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Xu, Sivaramakrishna, and Yang.
Claim 36:
Applicant argues that Xu does not teach or suggest a machine learning model that extracts contextual information from input contour data identifying a previously unidentified structure and applies that information as a computational basis for predicting corresponding contours in other image slices, and that Sivaramakrishna does not cure these deficiencies.
The Examiner agrees.
The Examiner introduced new art, Yang, which teaches a machine learning model that uses contextual information from the input contour data to generate output data (see the above rejection of Claim 36). The combination of Xu, Sivaramakrishna, and Yang discloses the limitations of the amended claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE whose telephone number is (571)270-7259. The examiner can normally be reached M-F 8a-4p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIONNA M BURKE/Examiner, Art Unit 2178 4/2/26