Prosecution Insights
Last updated: April 19, 2026
Application No. 18/198,538

SYSTEMS AND METHODS FOR TISSUE EVALUATION AND CLASSIFICATION

Status: Non-Final OA (§103)
Filed: May 17, 2023
Examiner: DRYDEN, EMMA ELIZABETH
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Oneprojects Design And Innovation Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 7 granted / 12 resolved; -3.7% vs TC avg)
Interview Lift: +25.0% (strong; with vs. without interview, among resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 34 currently pending
Career History: 46 total applications across all art units
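As a quick consistency check, the headline figures above fit together arithmetically: the 83% with-interview estimate is the 58% career allow rate (7 granted of 12 resolved) plus the reported +25.0% interview lift. A minimal sketch, with the figures copied from the panel above (variable names are illustrative, not from any real analytics API):

```python
# Cross-check of the examiner panel figures (values from the panel above;
# variable names are illustrative only).
granted, resolved = 7, 12
career_allow_rate = granted / resolved                # ~0.583, displayed as 58%
interview_lift = 0.25                                 # reported +25.0% lift
with_interview = career_allow_rate + interview_lift   # ~0.833, displayed as 83%

assert round(career_allow_rate * 100) == 58
assert round(with_interview * 100) == 83
```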

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 12 resolved cases
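Assuming each "vs TC avg" delta is computed as (examiner allow rate minus Tech Center average) — an assumption, since the tool does not state its formula — the four statute rows above all imply the same baseline of roughly 40%, consistent with a single TC-wide estimate. A minimal sketch with the figures from the table (names illustrative):

```python
# Back out the implied Tech Center average from each statute row above,
# assuming delta = examiner_rate - tc_average (an assumption, not stated by the tool).
examiner_rate = {"101": 9.7, "103": 56.4, "102": 16.6, "112": 13.9}
delta_vs_tc = {"101": -30.3, "103": 16.4, "102": -23.4, "112": -26.1}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
assert all(avg == 40.0 for avg in implied_tc_avg.values())  # same ~40.0% baseline in every row
```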

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims priority to provisional application 63/343,757. Claims 1-20 are supported by the provisional application. Accordingly, the priority date for claims 1-20 is 05/19/2022.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's RCE submission filed on 02/02/2026 has been entered.

Response to Amendment

The amendment filed 02/02/2026 has been entered. Claims 1-20 remain pending in the application. Regarding claim 2, although it is indicated as “Previously Presented” in the claim set filed 02/02/2026, the claim differs from the claim 2 presented in the most recently examined claim set filed 08/29/2025. Claim 2 will be interpreted as amended and examined as recited in the most recent claim set filed 02/02/2026, which should read as follows: 2. (Currently Amended) The method of claim 1, wherein the reference image data further comprises one or more images of the known tissue obtained and processed via one or more

Response to Arguments

Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Interpretation

Regarding claim 2, the plain language meaning of “one or more of” listed elements conjoined with “and” is interpreted to require at least one of every listed element.
However, based on the original claim (the imaging modality is selected from a group consisting of the listed elements) and the specification (for example, pg. 3, ln 8 requires only at least one other imaging modality besides CT), the claim is interpreted as a disjunctive list, and thus requires only at least one listed element.

Regarding claims 3 and 13, the claims are interpreted based on their plain language meaning. Therefore, the MRI system is interpreted to perform at least one of “late gadolinium enhanced MRI” and at least one of “diffusion weighted MRI sequences”, in accordance with the Federal Circuit’s 2004 SuperGuide Corp. v. DirecTV Enterprises, Inc. decision.

Regarding claims 6 and 16, the claims are interpreted based on their plain language meaning. Therefore, the classification data comprising “characteristics of the reference lesion” is interpreted to include at least one of each of the listed elements, in accordance with the Federal Circuit’s 2004 SuperGuide Corp. v. DirecTV Enterprises, Inc. decision.

Regarding claims 8 and 18, the claims are interpreted based on their plain language meaning. Therefore, the validation is interpreted to include at least one of each of the listed elements (a binary classification and a probability), in accordance with the Federal Circuit’s 2004 SuperGuide Corp. v. DirecTV Enterprises, Inc. decision.

Regarding claim 12, the plain language meaning of a selection of one or more from “the group consisting of” is interpreted to mean the selection of one or more of the listed elements. Further, the use of “consisting of” indicates that the group is closed to unrecited elements (see MPEP 2111.03(II)). Accordingly, the claims are interpreted to not include types of imaging modalities that are not listed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel et al. (U.S. Patent Application Publication No. 2023/0177681 A1), hereinafter Oubel, in view of Huizenga (U.S. Patent Application Publication No. 2012/0130226 A1), hereinafter Huizenga.

Regarding claim 1, Oubel teaches a method for training a neural network for evaluating and classifying tissue (Oubel, para 86: “neural network has previously been trained on a database of medical images”; para 110: “the neural network classifies each voxel as a region with ablation or without ablation”), the method comprising: providing, to a computing system (Oubel, para 74: “instructions for which are processed by a computer processor 182 of the electronic control device 181”; methods are performed by the processor), a plurality of training data sets (Oubel, para 86: “database of medical images”), wherein each training data set comprises: reference image data comprising at least computed tomography (CT) data associated with a known tissue (Oubel, para 77: “the pre-operative and post-operative medical images are preferably acquired by means of computed tomography”; para 87: “Advantageously, the post-operative medical image of the anatomical structure of interest 130 of the individual 110 is acquired in the same way as for the medical images in the training database for the neural network”), wherein the reference image data is associated with a reference lesion
formed in the known tissue via an ablation procedure (Oubel, lesion that was ablated, see para 89 citation below); and classification data associated with the known tissue, wherein the classification data is associated with the reference lesion and comprises one or more characteristics of the reference lesion (Oubel, where the lesion was ablated, para 89: “In order to train the neural network, the ablation region of each post-operative image in the database, where the lesion was ablated, has previously been segmented by at least two operators, in order to increase the relevance of the learning and therefore of the analysis results obtained by the neural network”), wherein the plurality of training data sets excludes digital histopathology data (Oubel, see Figure 4 wherein the annotated images show anatomical structure beyond what can be gathered from tissue samples of histopathology data); and training a neural network from the plurality of training data sets such that the neural network is suitable for evaluating and classifying a tissue characteristic related to an ablation procedure based on an association of the classification data with the reference image data (Oubel, see para 89 citation above; para 89: “The neural network is further trained to classify the voxels of a medical image in a region with ablation or without ablation”; the neural network is trained on the reference images and corresponding labels, thus the segmentation/classification of the image data is based on an association of the reference data and classification data). Oubel teaches in a non-limiting example wherein ablation regions are manually validated by an operator(s) (Oubel, para 89-90: “the learning may be performed using a single expert annotator who delineates the ablation regions in the medical images. 
The operator’s experience is then important so that the neural network may arrive at well-defined ablation regions”), but fails to explicitly teach wherein the reference image data is validated with histological data. However, Huizenga teaches a similar computer vision method (Huizenga, para 42: “automated, non-invasive, and objective detection and analysis (e.g., plaque identification and classification) of atherosclerotic (AT) lesions”; see also use of computerized axial tomography in the method in para 10) wherein the reference image data is validated with histological data (Huizenga, para 29: “To develop an automated system for classifying plaque, the model must be "trained" on known examples ("ground truth"). One can train a model to mimic the performance of an expert, but it is preferred to label these images, or the data used to generate images, with the most objective criteria possible, such as validation using histopathology sections of the tissue”). Thus, Huizenga teaches wherein medical images used to train a computer vision model are validated with histological data to confirm targets in the image that will be classified by the model (see previous citations from Huizenga). Oubel teaches in a non-limiting example wherein ablation regions are manually validated by an operator(s), but does not disclose validation with histological data. Huizenga presents validation by histological data as an alternative to images labeled by a human expert (Huizenga, para 130: “In the context of plaque detection and analysis (e.g., classification), the targets correspond to, as examples, images labeled by a human expert or validated by histological examination”). Thus, each disclose a method for validating labels in medical images to be used to train a computer vision model. 
A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the histological validation taught by Huizenga could have been substituted for the operator validation of Oubel because both serve the purpose of validating training image labels in order to ensure the accuracy of the trained model. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the histological validation of Huizenga for the operator validation of Oubel according to known methods to yield the predictable result of improving the trained model by validating the presence of lesions in training images with a secondary modality (with histological examination, described by Huizenga as the most objective criteria possible in para 29). In the same way as the claimed invention, the substitution combination of Oubel in view of Huizenga teaches wherein histology data is used to validate training image samples (reference image data), but the histopathology data is not input as training data for the model. Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel et al. (Stroebel, J., Horng, A., Armbruster, M., Mittone, A., Reiser, M., Bravin, A., & Coan, P., Convolutional neuronal networks combined with X-ray phase-contrast imaging for a fast and observer-independent discrimination of cartilage and liver diseases stages, 2020, Scientific Reports, 10(1), 20007), hereinafter Stroebel. 
Regarding claim 2 (dependent on claim 1), Oubel in view of Huizenga fails to explicitly teach wherein the reference image data further comprises one or more images of the known tissue obtained and processed via one or more of a transmission imaging system; a brightfield or darkfield imaging system; a fluorescence imaging system; a phase contrast imaging system; a differential interference contrast imaging system; a hyperspectral imaging system; a Raman or surface-enhanced Raman imaging system, and a magnetic resonance imaging (MRI) system. However, Stroebel teaches reference image data (Stroebel, training images described in the last paragraph on pg. 3) obtained and processed via a phase contrast imaging system (Stroebel, last para on pg. 3: “PCI CT image”; abstract on pg. 1: “We applied transfer learning using Convolutional Neuronal Networks to high resolution X-ray phase contrast computed tomography datasets and tested the potential of the systems to accurately classify Computed Tomography images of different stages of two diseases”). Oubel teaches training a machine learning model with reference images obtained via a CT imaging system (Oubel, CT image data). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have utilized the phase contrast CT imaging system of Stroebel in the method of Oubel in view of Huizenga in order to improve the model’s performance by training the machine learning model with images with increased contrast of anatomical features (Stroebel, pg. 2, 1st paragraph: “X-ray phase contrast imaging (PCI) has proven to provide enhanced sensitivity and accuracy for pathology detection in a not destructive way… As a result, the visibility of low-absorbing structures and of features with similar attenuation properties is largely enhanced. Combined with computed tomography (CT) methodologies, it can provide a highly contrasted 3D representation of the imaged volumes”). 
Regarding claim 3 (dependent on claim 2), Oubel in view of Huizenga and Stroebel teaches wherein the MRI system performs at least one of late gadolinium enhanced MRI and diffusion weighted MRI sequences (MRI was not selected from the group of claim 2, and therefore the limitations of claim 3 are not required).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel and Horry et al. (Horry, M. J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., & Shukla, N., COVID-19 detection through transfer learning using multimodal imaging data, 2020, IEEE Access, 8, 149808-149824), hereinafter Horry.

Regarding claim 4 (dependent on claim 2), Oubel in view of Huizenga and Stroebel fails to explicitly teach wherein the reference image data further comprises images of the known tissue obtained and processed via an ultrasound imaging system. However, Horry teaches reference image data (Horry, training data, pg. 149814, 1st paragraph in section B: “sample data sets”) obtained and processed via an ultrasound imaging system (Horry, pg. 149813, see Table 2 and the 1st paragraph). Oubel teaches training a machine learning model with reference images from one imaging modality (Oubel, CT image data), while Horry teaches training a machine learning model with reference images from both ultrasound and CT imaging systems (see previous citations above). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multimodal imaging data model of Horry with the method of Oubel in view of Huizenga and Stroebel in order to improve the model’s classification performance (Horry, last paragraph on pg. 149820: “Data fusion concept allows us to combine multiple modes of data to improve model classification performance”).

Claim 5 is rejected under 35 U.S.C.
103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, Horry, and Dobay et al. (Dobay, A., Ford, J., Decker, S., Ampanozi, G., Franckenberg, S., Affolter, R., et al., Potential use of deep learning techniques for postmortem imaging, 2020, Forensic Science, Medicine and Pathology, 16, 671-679), hereinafter Dobay. Regarding claim 5, (dependent on claim 4), Oubel in view of Huizenga, Stroebel, and Horry teaches wherein the reference image data comprises three-dimensional (3D) ultrasound image data and computed tomography (CT) image data of the known tissue (Oubel, medical images utilized are acquired in three dimensions, para 80: “images acquired in three dimensions”; see combination with Horry in claim 4 regarding the use of ultrasound image data), wherein the CT image data comprises phase contrast CT image data of the known tissue (Taught in combination with Stroebel, see claim 2 rejection). Oubel in view of Huizenga, Stroebel, and Horry fails to explicitly teach wherein the CT image data comprises post-mortem CT image data of the known tissue for anatomical reference. However, Dobay teaches a deep learning system utilizing reference image data including CT image data comprising post-mortem CT image data (Dobay, pg. 671, 3rd paragraph: “Image analysis for postmortem computed tomography”; pg. 675, Fig. 3, post-mortem data used in neural network training process) of known tissue for anatomical reference (Dobay, utilizing images for NN training acts as an anatomical reference because it requires anatomical information, such as classification labels, for training; see Fig. 3 caption on pg. 675 referencing “ground truth” training). 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the post-mortem CT image data of Dobay with the method of Oubel in view of Huizenga, Stroebel, and Horry in order to increase access to images with optimal quality for machine learning training (Dobay, end of pg. 671 to 1st paragraph of pg. 672: “As the radiation dose does not need to be considered, the scan protocol is optimized for image quality, which leads to more detailed images and therefore larger datasets”). Increasing the number of training samples, including the number of high-quality training samples, will improve the performance of the trained model.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Auerbach et al. (U.S. Patent Application Publication No. 2022/0036555 A1), hereinafter Auerbach, and Doron et al. (U.S. Patent Application Publication No. 2022/0079499 A1), hereinafter Doron.

Regarding claim 6 (dependent on claim 1), Oubel in view of Huizenga teaches wherein the one or more characteristics of the reference lesion comprise at least one of: a location of the reference lesion on the known tissue (Oubel, see Fig. 4 and para 89 wherein the lesion and ablation segmentations are annotated for training); a size of the reference lesion (Oubel, segmentation of the reference lesion reflects its size, which can be compared with the size of the ablation, para 119: “An ablation margin is then determined between the segmentation of the lesion and the ablation mask established previously, in sub-step 243. The ablation margin corresponds to the minimum margin, i.e. the minimum distance, taken between the segmentation of the lesion and the ablation mask”; see also Fig. 4).
While Oubel discloses determining a pathway to the reference lesion (Oubel, para 52-54) and predicting the success of the treated reference lesion (Oubel, para 121-123), Oubel fails to explicitly teach wherein the neural network is trained using these characteristics, as required by claim 1, and thus fails to explicitly teach: a pathway of the reference lesion; a depth of the reference lesion, and a known success of the reference lesion in the treatment of a cardiac-related condition. However, Auerbach teaches classification data comprising characteristics of a lesion including depth of the lesion (Auerbach, para 100: “the data collects the benefits from each of the underlying modalities (e.g., ultrasound data enable depth of tissue analysis”; this data is utilized in scar identification/classification, para 19: “The data of the ultrasound images after adjustment and interpretation is used by the system and method operating to identify scar tissue to automatically identify scar areas within the organic tissue”; used with a neural network, para 90: “The machine learning software/hardware may include, but is not limited to, neural networks, artificial neural networks, convoluted neural networks”) and a pathway of the lesion (Auerbach, scar tissue identification/classification is based on mapping of cardiac tissue, which includes the scar tissue/lesions, para 32: “The improved images, scans, and/or maps supported by the scar tissue identifier can provide multiple pieces of information regarding the electrophysiological properties of the intra-body organ (e.g., heart and/or organic tissue including the scar tissue) that represent the cardiac substrates (anatomical and functional) of these challenging arrhythmias”). 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the training of a model using lesion depth and pathway characteristics, taught by Auerbach, with the method of Oubel in view of Huizenga in order to improve the accuracy of the planned cardiac treatment based on the depth/pathway of the lesion (Auerbach, para 102: “In turn, the physicians and/or medical professionals may utilize the improved image data and/or the improved images into one or more of ablation-ultrasound technologies, planning and diagnosis of lesions, and assessment and diagnosis of magnetic resonance to address a disease state”). Training the neural network of Oubel with the depth and pathway characteristics can further teach the machine learning system how to determine a precise location of the lesion, and thus improve accuracy in classifying the associated ablated regions. Further, Doron teaches classification data comprising characteristics of a lesion including a known success of the lesion in the treatment of a cardiac-related condition (Doron, predict success of treatment, para 39: “All or part of the system 100 can be used to collect information (e.g., data/inputs, such as biometric data and/or a training dataset) and/or used to implement an optimization engine 101 (e.g., a ML/AI algorithm or model thereof). 
The optimization engine 101 can be defined as an optimization in which model parameters that best fit data and prior statistical knowledge are estimated in an iterative process to identify ablation gaps and predict success”; based on image data, para 50: “For example, the catheter 110 can use the electrodes 111 to implement intravascular ultrasound and/or MRI catheterization to image the heart 120 (e.g., obtain and process the biometric data)”; analysis of lesion, para 125: “At block 620, the optimization engine 101 receives data with respect to performance of ablation procedures for a current patient (e.g., the patient 125)”).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the training of a model using lesion success, taught by Doron, with the method of Oubel in view of Huizenga in order to improve the prediction of success of the treatment using a trained model, further aiding a physician in treating a patient (Doron, para 90: “The optimization engine 101 can then utilize models, algorithms (e.g., the unsupervised and/or supervised ML/AI algorithm), neural network to identify ablation gaps and provide success predictions to the physician 1515 to transform operations the system 100 that raise a success probability for a next procedure and/or eliminate unnecessary subsequent procedures”).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Auerbach, Doron, and Horry.
Regarding claim 7, (dependent on claim 6), Oubel in view of Huizenga, Auerbach, and Doron teaches further comprising: obtaining one or more images of sample tissue undergoing an ablation procedure (Oubel, para 76: “The post-treatment evaluation method 200 comprises a first step 210 of acquiring a post-operative medical image of the anatomical structure of interest 130”; the sample tissue is part of ongoing treatment using an ablation procedure, see para 50-53 wherein further ablation is planned); processing the one or more images and inputting sample image data, obtained in the processing step, into the computing system (Oubel, the post-treatment evaluation method is performed by computer processor 182, see para 74); correlating the sample image data with the reference lesion and known tissue data (Oubel, the neural network is trained on the reference lesion and known tissue data, see claim 1, and thus the segmentation/classification of the sample image data is based on correlations with the training data); and outputting results of the correlating step, wherein the results of the correlating step comprises identification of one or more lesion formations in the sample tissue (Oubel, para 118: “automatic segmentation based on a deep learning method is performed in order to determine the three-dimensional location of the lesion 510 and of the ablation region 520”; see also note below) and classification of the identified one or more lesion formations based on identified characteristics of the one or more lesion formations (Oubel, identification of ablated regions of lesions is performed, para 85: “The post-operative medical image of the anatomical structure of interest 130 is then analyzed by a neural network, which is a machine learning method, in a second step 220 in order to automatically segment the ablation region”). 
Oubel in view of Huizenga, Auerbach, and Doron fails to explicitly teach (1) wherein the one or more images of sample tissue are obtained via an ultrasound imaging system (emphasis added) and (2) that the neural network referenced in claim 1 and the correlating step above explicitly identifies lesion formations (performed by the deep learning method in the para 118 citation).

Regarding (2), it is recognized that the citations and evidence provided above are derived from potentially different machine learning models of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary elements. Oubel explicitly motivates the combination of different features in paragraph 69, disclosing: “The present description is provided without limitation, each feature of one embodiment being able to be combined with any other feature of any other embodiment in an advantageous manner” and otherwise motivating experimentation and optimization. Oubel teaches one model that can identify the location of both lesions and ablated regions (para 118 citation above). Doing so would increase the efficiency of the system by utilizing one neural network to segment two types of regions, as taught by paragraph 118, instead of two neural networks.

Additionally, Auerbach teaches obtaining, via an ultrasound imaging system, one or more images of sample tissue undergoing an ablation procedure (Auerbach, para 20: “The system and method operating to identify scar tissue may be practically applied, but not limited to, ablation-ultrasound technologies”; para 22: “The system 100 may include components, such as a catheter 105, that are configured to use intravascular ultrasound and/or MRI catheterization to image of an intra-body organ”; see Figure 1).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the application of the scar detection method during an ablation procedure using ultrasound images of Auerbach with the method of Oubel in view of Huizenga, Auerbach, and Doron in order to aid physicians during intracardiac ablation treatments utilizing the available imaging systems (Auerbach, para 20: “The technical effects and benefits of the system and method operating to identify scar tissue include generating more accurate and higher resolution real-time image data for the ultrasound images (e.g., enhanced accuracy over or higher resolution than original data of the first modality) without relying on a human operator's subjective interpretation (as in conventional imaging modalities)”; see para 22 citation above regarding ultrasound catheterization). Lastly, although Oubel teaches training the neural network with CT reference image data, Horry teaches reference image data (Horry, training data, pg. 149814, 1st paragraph in section B: “sample data sets”) obtained and processed via an ultrasound imaging system and a CT imaging system (Horry, pg. 149813, see Table 2 and the 1st paragraph). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multimodal imaging data model of Horry with the method of Oubel in view of Huizenga, Auerbach, and Doron in order to improve the model’s classification performance (Horry, last paragraph on pg. 149820: “Data fusion concept allows us to combine multiple modes of data to improve model classification performance”). Accordingly, the multimodal imaging data model, taught in the combination above, could still be combined with the teachings of Auerbach above. Claim 8 is rejected under 35 U.S.C. 
103 as being unpatentable over Oubel in view of Huizenga, in further view of Auerbach, Doron, Horry, and Golden et al. (WO 2019/103912 A2), hereinafter Golden.

Regarding claim 8 (dependent on claim 7), Oubel in view of Huizenga, Auerbach, Doron, and Horry teaches wherein the results of the correlating step further comprise validation of one or more lesion formations (Oubel, validation step, para 98-100; medical images include the lesions), wherein validation comprises one or more of a binary classification of tissue ablated or not-ablated (Oubel, the neural network performs a binary classification of ablated or not, and thus validation weights and bias depend on this classification, para 104: “validate the weight W and the bias b determined beforehand for each neuron of the neural network, on the basis of the medical images in the validation database, in order to verify the results of the neural network”), but fails to explicitly teach wherein the validation comprises a probability of tissue being ablated or not ablated.

However, Golden teaches a similar system that utilizes a neural network to detect and segment lesions (Golden, pg. 7, ln 8-16: “classifies the entire input anatomical structure as containing a lesion candidate”). Golden teaches further: validation of one or more lesion formations, wherein validation comprises a probability of tissue classification (Golden, pg. 11, ln 28-31: “processes the received image data through a fully convolutional neural network (CNN) model to generate probability maps for each image of the image data, wherein the probability of each pixel represents the probability of whether or not the pixel is part of a lesion candidate”; model performance is validated, pg. 25, ln 23-24: “Metrics other than validation loss, such as validation accuracy, could also be used to indicate evaluate model performance”; pg.
47, ln 19-21: “To choose the optimal model, a random search over these hyperparameters is performed and the model with the highest validation accuracy is chosen”).

Oubel discloses a method to validate the performance of a neural network, but does not specify utilizing probability of the predicted classification. Golden teaches a method to validate the performance of a neural network utilizing the probability of the predicted classification. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by Golden, in the same way to the method of Oubel in view of Huizenga, Auerbach, Doron, and Horry and achieved predictable results of improving the classification task performed by the neural network by adjusting parameters of the model based on probability of the classification result. Including the probability of the predicted result allows the model to give more weight to results that are classified at a higher probability.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of O’Brien et al. (U.S. Patent Application Publication No. 2020/0297284 A1), hereinafter O’Brien.

Regarding claim 9 (dependent on claim 1), Oubel in view of Huizenga teaches wherein the computing system comprises an autonomous machine learning system (Oubel, neural network of para 89) that associates the classification data with the reference image data (Oubel, para 89: “The neural network is further trained to classify the voxels of a medical image in a region with ablation or without ablation”), but fails to explicitly teach wherein the machine learning system comprises a deep learning neural network that includes an input layer, a plurality of hidden layers, and an output layer.
However, O’Brien teaches a similar computing system comprising an autonomous machine learning system (O’Brien, para 9: “scar detection network using machine learning techniques”; para 44: “automatic cardiac scar detection”) that associates classification data with reference image data (O’Brien, para 38: “matches that of the data associated with the anatomical mask training data 206 used to train the CNN 260…identify scar tissue locations and quantity in a similar manner”), wherein the machine learning system comprises a deep learning neural network that includes an input layer, a plurality of hidden layers, and an output layer (O’Brien, para 40: “the convolutional neural network may have an input layer that is configured to receive the anatomical mask data 242 as one or more images, multiple hidden layers (e.g. Conv, ReLu, and Crop pooling), which function to filter, rectify, and downsample the processed data, as well as an output layer that is configured to classify pixels in the image data as cardiac scar tissue, as non-cardiac scar tissue, or as any other suitable type of tissue in accordance with the training of the CNN 260”). Oubel discloses a neural network, but does not specify a deep learning neural network including a plurality of layers. O’Brien teaches the claimed deep learning neural network layers, and a known technique of utilizing a deep learning neural network to perform a classification task with image data. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by O’Brien, in the same way to the method of Oubel in view of Huizenga and achieved predictable results of improving the classification task performed by the neural network by allowing the model to learn more complex features using a deep network with multiple hidden layers. Claim 10 is rejected under 35 U.S.C. 
103 as being unpatentable over Oubel in view of Huizenga, in further view of O’Brien and Isgum et al. (U.S. Patent Application Publication No. 2019/0333216 A1), hereinafter Isgum. Regarding claim 10 (dependent on claim 9), Oubel in view of Huizenga and O’Brien fails to explicitly teach wherein the autonomous machine learning system represents the training data set using a plurality of features, wherein each feature comprises a feature vector; however, Isgum teaches a machine learning system (Isgum, para 74: “machine learning”) that represents the training data using a plurality of features (Isgum, para 75: “features”), wherein each feature comprises a feature vector (Isgum, para 75: “Given a (large) database of images and extracted feature vectors whose labels are known and were used beforehand to train the machine-learning algorithm, classifying unseen images based on the features extracted”). Oubel discloses a machine learning system utilizing classified training data (Oubel, para 89) to train the system to detect image features (Oubel, identification of ablated region), but does not explicitly disclose how the data is stored. Isgum discloses a machine learning system utilizing classified training data (Isgum, para 75: “labels”) to train the system to detect image features (Isgum, para 75: “classifying unseen images based on the features extracted”) using feature vectors. Thus, Oubel and Isgum each disclose a machine learning system to detect image features. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the labeled feature-vector training data of Isgum could have been substituted for the annotated training data of Oubel because both serve the purpose of training a machine learning system to classify images based on detected features. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the feature vectors of Isgum for the labeled training data of Oubel, in Oubel in view of Huizenga and O’Brien, according to known methods to yield the predictable result of training a machine learning system for accurate feature classification using identified feature vectors from the image. Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel. Regarding claim 11, Oubel teaches a method for classifying tissue (Oubel, para 110: “the neural network classifies each voxel as a region with ablation or without ablation”), the method comprising: providing tissue data of a patient (Oubel, para 76: “post-operative medical image”) to a computer running a neural network (Oubel, para 74: “instructions for which are processed by a computer processor 182 of the electronic control device 181”; para 86: “neural network”; the post-treatment evaluation method is performed by computer processor 182, see para 74), wherein the tissue data comprises one or more images of sample tissue undergoing or having undergone an ablation procedure (Oubel, para 76: “The post-treatment evaluation method 200 comprises a first step 210 of acquiring a post-operative medical image of the anatomical structure of interest 130”; the sample tissue is part of ongoing treatment using an ablation procedure, see para 50-53 wherein further ablation is planned), wherein the neural network has been trained to classify tissue and the neural network has been trained using a plurality of training data sets (Oubel, para 89: “The neural network is further trained to classify the voxels of a medical image in a region with ablation or without ablation”), each training data set comprising: reference image data comprising at least CT data associated with a known tissue (Oubel, para 77: “the pre-operative 
and post-operative medical images are preferably acquired by means of computed tomography”; para 87: “Advantageously, the post-operative medical image of the anatomical structure of interest 130 of the individual 110 is acquired in the same way as for the medical images in the training database for the neural network”), wherein the reference image data is associated with a reference lesion formed in the known tissue via an ablation procedure (Oubel, lesion that was ablated, see para 89 citation below); and known classification data associated with the known tissue and the reference lesion, the known classification data comprising one or more characteristics of the reference lesion (Oubel, where the lesion was ablated, para 89: “In order to train the neural network, the ablation region of each post-operative image in the database, where the lesion was ablated, has previously been segmented by at least two operators, in order to increase the relevance of the learning and therefore of the analysis results obtained by the neural network”), wherein the plurality of training data sets excludes digital histopathology data (Oubel, see Figure 4 wherein the annotated images show anatomical structure beyond what can be gathered from tissue samples of histopathology data); and classifying the tissue data of the patient using the neural network and based on an association of the tissue data with the known classification data with the reference image data (Oubel, see para 89 citations above; para 89: “The neural network is further trained to classify the voxels of a medical image in a region with ablation or without ablation”; the neural network is trained on the reference images and corresponding labels, thus the segmentation/classification of the image data is based on an association of the reference data and classification data). 
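As an illustrative aside (an editor's sketch, not part of Oubel's disclosure), the supervision scheme cited above, in which operator-segmented ablation masks provide voxel-wise labels for training, can be reduced to a single logistic neuron with a weight W and bias b of the kind referenced in Oubel's para 104. All intensity values, sizes, and the model itself are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the labeled training data described above:
# voxel intensities from reference images, with operator segmentation
# providing ablated / not-ablated labels. Values are illustrative only.
n_voxels = 1000
labels = rng.integers(0, 2, n_voxels)              # 1 = ablated voxel
raw = np.where(labels == 1,
               rng.normal(40.0, 10.0, n_voxels),   # ablated-region intensities
               rng.normal(70.0, 10.0, n_voxels))   # untreated-tissue intensities
x = (raw - raw.mean()) / raw.std()                 # standardized feature
X = np.column_stack([x, np.ones(n_voxels)])        # feature plus bias column

# One logistic "neuron" with weight W and bias b, echoing the para 104
# language about validating W and b for each neuron of the network.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))               # P(ablated | voxel)
    w -= 0.1 * X.T @ (p - labels) / n_voxels       # mean-gradient descent step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print(f"voxel-wise training accuracy: {(pred == labels).mean():.2f}")
```

The reference itself describes a convolutional network over 3D volumes; the sketch shows only the labeled-voxel supervision structure, with the held-out validation split omitted for brevity.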
Oubel teaches in a non-limiting example wherein ablation regions are manually validated by one or more operators (Oubel, para 89-90: “the learning may be performed using a single expert annotator who delineates the ablation regions in the medical images. The operator’s experience is then important so that the neural network may arrive at well-defined ablation regions”), but fails to explicitly teach wherein the reference image data is validated with histological data. Oubel also fails to explicitly teach wherein the CT data is phase contrast CT data. However, Huizenga teaches a similar computer vision method (Huizenga, para 42: “automated, non-invasive, and objective detection and analysis (e.g., plaque identification and classification) of atherosclerotic (AT) lesions”; see also use of computerized axial tomography in the method in para 10) wherein the reference image data is validated with histological data (Huizenga, para 29: “To develop an automated system for classifying plaque, the model must be "trained" on known examples ("ground truth"). One can train a model to mimic the performance of an expert, but it is preferred to label these images, or the data used to generate images, with the most objective criteria possible, such as validation using histopathology sections of the tissue”). Thus, Huizenga teaches wherein medical images used to train a computer vision model are validated with histological data to confirm targets in the image that will be classified by the model (see previous citations from Huizenga). Oubel teaches in a non-limiting example wherein ablation regions are manually validated by one or more operators, but does not disclose validation with histological data. 
Huizenga presents validation by histological data as an alternative to images labeled by a human expert (Huizenga, para 130: “In the context of plaque detection and analysis (e.g., classification), the targets correspond to, as examples, images labeled by a human expert or validated by histological examination”). Thus, Oubel and Huizenga each disclose a method for validating labels in medical images to be used to train a computer vision model. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the histological validation taught by Huizenga could have been substituted for the operator validation of Oubel because both serve the purpose of validating training image labels in order to ensure the accuracy of the trained model. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the histological validation of Huizenga for the operator validation of Oubel according to known methods to yield the predictable result of improving the trained model by validating the presence of lesions in training images with a secondary modality (with histological examination, described by Huizenga as the most objective criteria possible in para 29). Like the claimed invention, the combination of Oubel in view of Huizenga teaches wherein histology data is used to validate training image samples (reference image data), but the histopathology data is not input as training data for the model. Additionally, Stroebel teaches the use of phase contrast CT image data in lesion detection machine learning (Stroebel, pg. 
2, last paragraph before Methods: “Our objective is to apply and compare the performance of different CNN systems in terms of their capability in discriminating different stages of cartilage and liver diseases using, as input, datasets images acquired by highly sensitive PCI methods”; PCI is phase contrast imaging, see also abstract on pg. 1). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the phase contrast CT image data of Stroebel with the method of Oubel in order to improve the model’s performance by training the machine learning model with images with increased contrast of anatomical features (Stroebel, pg. 2, 1st paragraph: “X-ray phase contrast imaging (PCI) has proven to provide enhanced sensitivity and accuracy for pathology detection in a not destructive way… As a result, the visibility of low-absorbing structures and of features with similar attenuation properties is largely enhanced. Combined with computed tomography (CT) methodologies, it can provide a highly contrasted 3D representation of the imaged volumes”). Regarding claim 12 (dependent on claim 11), Oubel in view of Huizenga and Stroebel teaches wherein the reference image data further comprises one or more images of the known tissue obtained and processed via one or more imaging modalities selected from the group consisting of: an ultrasound imaging system; a transmission imaging system; a brightfield or darkfield imaging system; a fluorescence imaging system; a phase contrast imaging system; a differential interference contrast imaging system; a hyperspectral imaging system; a Raman or surface-enhanced Raman imaging system, and a magnetic resonance imaging (MRI) system (Phase contrast imaging system taught in combination with Stroebel in claim 11 above). 
Regarding claim 13 (dependent on claim 12), Oubel in view of Huizenga and Stroebel teaches wherein the MRI system performs at least one of late gadolinium enhanced MRI and diffusion weighted MRI sequences (MRI was not selected from the group of claim 12, and therefore the limitations of claim 13 are not required). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel and Horry. Regarding claim 14 (dependent on claim 12), Oubel in view of Huizenga and Stroebel fails to explicitly teach wherein the reference image data further comprises images of the known tissue obtained and processed via an ultrasound imaging system. However, Horry teaches reference image data (Horry, training data, pg. 149814, 1st paragraph in section B: “sample data sets”) obtained and processed via an ultrasound imaging system (Horry, pg. 149813, see Table 2 and the 1st paragraph). Oubel teaches training a machine learning model with reference images from one imaging modality (Oubel, CT image data), while Horry teaches training a machine learning model with reference images from both ultrasound and CT imaging systems (See previous citations above). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multimodal imaging data model of Horry with the method of Oubel in view of Huizenga and Stroebel in order to improve the model’s classification performance (Horry, last paragraph on pg. 149820: “Data fusion concept allows us to combine multiple modes of data to improve model classification performance”). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, Horry, and Dobay. 
Regarding claim 15 (dependent on claim 14), Oubel in view of Huizenga, Stroebel, and Horry teaches wherein the reference image data comprises three-dimensional (3D) ultrasound image data and computed tomography (CT) image data of the known tissue (Oubel, medical images utilized are acquired in three dimensions, para 80: “images acquired in three dimensions”; see combination with Horry in claim 14 regarding the use of ultrasound image data), wherein the CT image data comprises phase contrast CT image data of the known tissue (taught in combination with Stroebel, see claim 11 rejection), but fails to explicitly teach wherein the CT image data comprises post-mortem CT image data of the known tissue for anatomical reference. However, Dobay teaches a deep learning system utilizing reference image data including CT image data comprising post-mortem CT image data (Dobay, pg. 671, 3rd paragraph: “Image analysis for postmortem computed tomography”; pg. 675, Fig. 3, post-mortem data used in neural network training process) of known tissue for anatomical reference (Dobay, utilizing images for NN training acts as an anatomical reference because it requires anatomical information, such as classification labels, for training; see Fig. 3 caption on pg. 675 referencing “ground truth” training). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the post-mortem CT image data of Dobay with the method of Oubel in view of Huizenga, Stroebel, and Horry in order to increase access to images with optimal quality for machine learning training (Dobay, end of pg. 671 to 1st paragraph of pg. 672: “As the radiation dose does not need to be considered, the scan protocol is optimized for image quality, which leads to more detailed images and therefore larger datasets”). 
Increasing the number of training samples, including the number of high-quality training samples, will improve the performance of the trained model. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, Auerbach, and Doron. Regarding claim 16 (dependent on claim 11), Oubel in view of Huizenga and Stroebel teaches wherein the one or more characteristics of the reference lesion comprise at least one of: a location of the reference lesion on the known tissue (Oubel, see Fig. 4 and para 89 wherein the lesion and ablation segmentations are annotated for training); a size of the reference lesion (Oubel, segmentation of the reference lesion reflects its size, which can be compared with the size of the ablation, para 119: “An ablation margin is then determined between the segmentation of the lesion and the ablation mask established previously, in sub-step 243. The ablation margin corresponds to the minimum margin, i.e. the minimum distance, taken between the segmentation of the lesion and the ablation mask”; see also Fig. 4). While Oubel discloses determining a pathway to the reference lesion (Oubel, para 52-54) and predicting the success of the treated reference lesion (Oubel, para 121-123), Oubel fails to explicitly teach wherein the neural network is trained using these characteristics, as required by claim 11, and thus fails to explicitly teach: a pathway of the reference lesion; a depth of the reference lesion; and a known success of the reference lesion in the treatment of a cardiac-related condition. 
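The ablation margin quoted above from Oubel's para 119, i.e. the minimum distance between the lesion segmentation and the ablation mask, can be illustrated with a small editor's sketch. The 2D masks below are hypothetical (the reference works with 3D voxel volumes):

```python
import numpy as np

# Illustrative 2D stand-in for the para 119 ablation margin: the minimum
# distance between the lesion segmentation and the ablation-mask boundary.
grid = np.zeros((20, 20), dtype=bool)

lesion = np.zeros_like(grid)
lesion[8:12, 8:12] = True          # small lesion segmentation

ablation = np.zeros_like(grid)
ablation[5:15, 5:15] = True        # larger surrounding ablation mask

def mask_margin(inner: np.ndarray, outer: np.ndarray) -> float:
    """Minimum Euclidean distance from inner-mask voxels to the outer-mask edge."""
    inner_pts = np.argwhere(inner)
    # Edge of the outer mask: voxels in the mask with a 4-neighbor outside it.
    padded = np.pad(outer, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    edge_pts = np.argwhere(outer & ~interior)
    dists = np.linalg.norm(inner_pts[:, None, :] - edge_pts[None, :, :], axis=-1)
    return float(dists.min())

print(f"ablation margin: {mask_margin(lesion, ablation):.2f} voxels")
```

With the lesion fully inside the larger ablation mask, the computed margin is the closest approach of any lesion voxel to the ablation boundary.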
However, Auerbach teaches classification data comprising characteristics of a lesion including depth of the lesion (Auerbach, para 100: “the data collects the benefits from each of the underlying modalities (e.g., ultrasound data enable depth of tissue analysis”; this data is utilized in scar identification/classification, para 19: “The data of the ultrasound images after adjustment and interpretation is used by the system and method operating to identify scar tissue to automatically identify scar areas within the organic tissue”; used with a neural network, para 90: “The machine learning software/hardware may include, but is not limited to, neural networks, artificial neural networks, convoluted neural networks”) and a pathway of the lesion (Auerbach, scar tissue identification/classification is based on mapping of cardiac tissue, which includes the scar tissue/lesions, para 32: “The improved images, scans, and/or maps supported by the scar tissue identifier can provide multiple pieces of information regarding the electrophysiological properties of the intra-body organ (e.g., heart and/or organic tissue including the scar tissue) that represent the cardiac substrates (anatomical and functional) of these challenging arrhythmias”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the training of a model using lesion depth and pathway characteristics, taught by Auerbach, with the method of Oubel in view of Huizenga and Stroebel in order to improve the accuracy of the planned cardiac treatment based on the depth/pathway of the lesion (Auerbach, para 102: “In turn, the physicians and/or medical professionals may utilize the improved image data and/or the improved images into one or more of ablation-ultrasound technologies, planning and diagnosis of lesions, and assessment and diagnosis of magnetic resonance to address a disease state”). 
Training the neural network of Oubel with the depth and pathway characteristics can further teach the machine learning system how to determine a precise location of the lesion, and thus improve accuracy in classifying the associated ablated regions. Further, Doron teaches classification data comprising characteristics of a lesion including a known success of the lesion in the treatment of a cardiac-related condition (Doron, predict success of treatment, para 39: “All or part of the system 100 can be used to collect information (e.g., data/inputs, such as biometric data and/or a training dataset) and/or used to implement an optimization engine 101 (e.g., a ML/AI algorithm or model thereof). The optimization engine 101 can be defined as an optimization in which model parameters that best fit data and prior statistical knowledge are estimated in an iterative process to identify ablation gaps and predict success”; based on image data, para 50: “For example, the catheter 110 can use the electrodes 111 to implement intravascular ultrasound and/or MRI catheterization to image the heart 120 (e.g., obtain and process the biometric data)”; analysis of lesion, para 125: “At block 620, the optimization engine 101 receives data with respect to performance of ablation procedures for a current patient (e.g., the patient 125)”). 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the training of a model using lesion success, taught by Doron, with the method of Oubel in view of Huizenga and Stroebel in order to improve the prediction of success of the treatment using a trained model, further aiding a physician in treating a patient (Doron, para 90: “The optimization engine 101 can then utilize models, algorithms (e.g., the unsupervised and/or supervised ML/AI algorithm), neural network to identify ablation gaps and provide success predictions to the physician 1515 to transform operations the system 100 that raise a success probability for a next procedure and/or eliminate unnecessary subsequent procedures”). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, Auerbach, Doron, and Horry. Regarding claim 17 (dependent on claim 16), Oubel in view of Huizenga, Stroebel, Auerbach, and Doron teaches further comprising: obtaining one or more images of sample tissue undergoing an ablation procedure (Oubel, para 76: “The post-treatment evaluation method 200 comprises a first step 210 of acquiring a post-operative medical image of the anatomical structure of interest 130”; the sample tissue is part of ongoing treatment using an ablation procedure, see para 50-53 wherein further ablation is planned); processing the one or more images and inputting sample image data, obtained in the processing step, into the computing system (Oubel, the post-treatment evaluation method is performed by computer processor 182, see para 74); correlating the sample image data with the reference lesion and known tissue data (Oubel, the neural network is trained on the reference lesion and known tissue data, see claim 11, and thus the segmentation/classification of the sample image data is based on correlations with the training data); and outputting results 
of the correlating step, wherein the results of the correlating step comprise identification of one or more lesion formations in the sample tissue (Oubel, para 118: “automatic segmentation based on a deep learning method is performed in order to determine the three-dimensional location of the lesion 510 and of the ablation region 520”; see note below) and classification of the identified one or more lesion formations based on identified characteristics of the one or more lesion formations (Oubel, identification of ablated regions of lesions is performed, para 85: “The post-operative medical image of the anatomical structure of interest 130 is then analyzed by a neural network, which is a machine learning method, in a second step 220 in order to automatically segment the ablation region”). Oubel in view of Huizenga, Auerbach, and Doron fails to explicitly teach (1) wherein the one or more images of sample tissue are obtained via an ultrasound imaging system (emphasis added) and (2) that the neural network referenced in claim 11 and the correlating step above explicitly identifies lesion formations (performed by the deep learning method in the para 118 citation). Regarding (2), it is recognized that the citations and evidence provided above are derived from potentially different machine learning models of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary elements. Oubel explicitly motivates the combination of different features in paragraph 69, disclosing: “The present description is provided without limitation, each feature of one embodiment being able to be combined with any other feature of any other embodiment in an advantageous manner” and otherwise motivating experimentation and optimization. 
Oubel teaches one model that can identify the location of both lesions and ablated regions (para 118 citation above). Doing so would increase the efficiency of the system by utilizing one neural network to segment two types of regions, as taught by paragraph 118, instead of two neural networks. Additionally, Auerbach teaches obtaining, via an ultrasound imaging system, one or more images of sample tissue undergoing an ablation procedure (Auerbach, para 20: “The system and method operating to identify scar tissue may be practically applied, but not limited to, ablation-ultrasound technologies”; para 22: “The system 100 may include components, such as a catheter 105, that are configured to use intravascular ultrasound and/or MRI catheterization to image of an intra-body organ”; see Figure 1). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the application of the scar detection method during an ablation procedure using ultrasound images of Auerbach with the method of Oubel in view of Huizenga, Stroebel, Auerbach and Doron in order to aid physicians during intracardiac ablation treatments utilizing the available imaging systems (Auerbach, para 20: “The technical effects and benefits of the system and method operating to identify scar tissue include generating more accurate and higher resolution real-time image data for the ultrasound images (e.g., enhanced accuracy over or higher resolution than original data of the first modality) without relying on a human operator's subjective interpretation (as in conventional imaging modalities)”; see para 22 citation above regarding ultrasound catheterization). Lastly, although Oubel teaches training the neural network with CT reference image data, Horry teaches reference image data (Horry, training data, pg. 
149814, 1st paragraph in section B: “sample data sets”) obtained and processed via an ultrasound imaging system and a CT imaging system (Horry, pg. 149813, see Table 2 and the 1st paragraph). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multimodal imaging data model of Horry with the method of Oubel in view of Huizenga, Stroebel, Auerbach and Doron in order to improve the model’s classification performance (Horry, last paragraph on pg. 149820: “Data fusion concept allows us to combine multiple modes of data to improve model classification performance”). Accordingly, the multimodal imaging data model, taught in the combination above, could still be combined with the teachings of Auerbach above. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, Auerbach, Doron, Horry, and Golden. Regarding claim 18 (dependent on claim 17), Oubel in view of Huizenga, Stroebel, Auerbach, Doron, and Horry teaches wherein the results of the correlating step further comprise validation of one or more lesion formations (Oubel, validation step, para 98-100; medical images include the lesions), wherein validation comprises one or more of a binary classification of tissue ablated or not-ablated (Oubel, the neural network performs a binary classification of ablated or not, and thus validation weights and bias depend on this classification, para 104: “validate the weight W and the bias b determined beforehand for each neuron of the neural network, on the basis of the medical images in the validation database, in order to verify the results of the neural network”), but fails to explicitly teach wherein the validation comprises a probability of tissue being ablated or not ablated. However, Golden teaches a similar system that utilizes a neural network to detect and segment lesions (Golden, pg. 
7, ln 8-16: “classifies the entire input anatomical structure as containing a lesion candidate”). Golden teaches further: validation of one or more lesion formations, wherein validation comprises a probability of tissue classification (Golden, pg. 11, ln 28-31: “processes the received image data through a fully convolutional neural network (CNN) model to generate probability maps for each image of the image data, wherein the probability of each pixel represents the probability of whether or not the pixel is part of a lesion candidate”; model performance is validated, pg. 25, ln 23-24: “Metrics other than validation loss, such as validation accuracy, could also be used to indicate evaluate model performance”; pg. 47, ln 19-21: “To choose the optimal model, a random search over these hyperparameters is performed and the model with the highest validation accuracy is chosen”). Oubel discloses a method to validate the performance of a neural network, but does not specify utilizing the probability of the predicted classification. Golden teaches a method to validate the performance of a neural network utilizing the probability of the predicted classification. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by Golden, in the same way to the method of Oubel in view of Huizenga, Stroebel, Auerbach, Doron, and Horry and achieved predictable results of improving the classification task performed by the neural network by adjusting parameters of the model based on the probability of the classification result. Including the probability of the predicted result allows the model to give more weight to results that are classified at a higher probability. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel and O’Brien. 
Regarding claim 19 (dependent on claim 11), Oubel in view of Huizenga and Stroebel teaches wherein the computing system comprises an autonomous machine learning system (Oubel, neural network of para 89) that associates the classification data with the reference image data (Oubel, para 89: “The neural network is further trained to classify the voxels of a medical image in a region with ablation or without ablation”), but fails to explicitly teach wherein the machine learning system comprises a deep learning neural network that includes an input layer, a plurality of hidden layers, and an output layer. However, O’Brien teaches a similar computing system comprising an autonomous machine learning system (O’Brien, para 9: “scar detection network using machine learning techniques”; para 44: “automatic cardiac scar detection”) that associates classification data with reference image data (O’Brien, para 38: “matches that of the data associated with the anatomical mask training data 206 used to train the CNN 260…identify scar tissue locations and quantity in a similar manner”), wherein the machine learning system comprises a deep learning neural network that includes an input layer, a plurality of hidden layers, and an output layer (O’Brien, para 40: “the convolutional neural network may have an input layer that is configured to receive the anatomical mask data 242 as one or more images, multiple hidden layers (e.g. Conv, ReLu, and Crop pooling), which function to filter, rectify, and downsample the processed data, as well as an output layer that is configured to classify pixels in the image data as cardiac scar tissue, as non-cardiac scar tissue, or as any other suitable type of tissue in accordance with the training of the CNN 260”). Oubel discloses a neural network, but does not specify a deep learning neural network including a plurality of layers. 
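For illustration only (this is not O'Brien's disclosed CNN, and all sizes and weights below are placeholders), the claimed structure of an input layer, a plurality of hidden layers, and an output layer that classifies pixels by tissue type can be sketched as a single forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: per-pixel feature vectors in, three tissue classes out.
n_pixels, n_features, n_classes = 64, 16, 3
W1 = rng.normal(0, 0.1, (n_features, 32))     # input layer -> hidden layer 1
W2 = rng.normal(0, 0.1, (32, 32))             # hidden layer 1 -> hidden layer 2
W3 = rng.normal(0, 0.1, (32, n_classes))      # hidden layer 2 -> output layer

pixels = rng.normal(size=(n_pixels, n_features))  # placeholder image features

h1 = relu(pixels @ W1)         # first hidden layer (filter/rectify role)
h2 = relu(h1 @ W2)             # second hidden layer
probs = softmax(h2 @ W3)       # output layer: per-pixel class probabilities

labels = probs.argmax(axis=1)  # e.g. 0=scar, 1=non-scar, 2=other (illustrative)
print(probs.shape, labels.shape)
```

Note that the softmax output also yields a per-pixel class probability, the same kind of quantity relied on from Golden in the claim 18 rejection above.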
O’Brien teaches the claimed deep learning neural network layers, and a known technique of utilizing a deep learning neural network to perform a classification task with image data. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by O’Brien, in the same way to the method of Oubel in view of Huizenga and Stroebel and achieved predictable results of improving the classification task performed by the neural network by allowing the model to learn more complex features using a deep network with multiple hidden layers.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Oubel in view of Huizenga, in further view of Stroebel, O’Brien, and Isgum.

Regarding claim 20 (dependent on claim 19), Oubel in view of Huizenga, Stroebel, and O’Brien fails to explicitly teach wherein the autonomous machine learning system represents the training data set using a plurality of features, wherein each feature comprises a feature vector; however, Isgum teaches a machine learning system (Isgum, para 74: “machine learning”) that represents the training data using a plurality of features (Isgum, para 75: “features”), wherein each feature comprises a feature vector (Isgum, para 75: “Given a (large) database of images and extracted feature vectors whose labels are known and were used beforehand to train the machine-learning algorithm, classifying unseen images based on the features extracted”). Oubel discloses a machine learning system utilizing classified training data (Oubel, para 89) to train the system to detect image features (Oubel, identification of ablated region), but does not explicitly disclose how the data is stored.
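The feature-vector workflow Isgum quotes (train on labeled feature vectors, then classify unseen images by their extracted features) can be sketched in a few lines. The nearest-centroid rule, the two-dimensional feature vectors, and the "lesion"/"healthy" labels below are all invented stand-ins, since the quoted passage does not name a specific algorithm or feature set.

```python
from collections import defaultdict
import math

# Hypothetical labeled training data: (feature vector, known label) pairs.
training = [
    ([0.9, 0.8], "lesion"),  ([1.0, 0.7], "lesion"),
    ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy"),
]

# "Train": compute one centroid per label from the labeled feature vectors.
sums = defaultdict(lambda: [0.0, 0.0, 0])
for (x, y), label in training:
    s = sums[label]
    s[0] += x; s[1] += y; s[2] += 1
centroids = {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def classify(vec):
    # Classify an unseen feature vector by its nearest class centroid.
    return min(centroids, key=lambda lab: math.dist(vec, centroids[lab]))

# An unseen image reduced to a feature vector near the lesion cluster.
result = classify([0.85, 0.75])  # -> "lesion"
```

The point is the data representation the claim recites: each training example is stored as a feature vector paired with a label, and unseen inputs are classified by comparing their feature vectors against what was learned.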
Isgum discloses a machine learning system utilizing classified training data (Isgum, para 75: “labels”) to train the system to detect image features (Isgum, para 75: “classifying unseen images based on the features extracted”) using feature vectors. Thus, Oubel and Isgum each disclose a machine learning system to detect image features.

A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the annotated training data of Oubel could have been substituted for the labeled training data of Isgum because both serve the purpose of training a machine learning system to classify images based on detected features. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the labeled training data of Oubel, in Oubel in view of Huizenga, Stroebel, and O’Brien, for the feature vectors of Isgum according to known methods to yield the predictable result of training a machine learning system for accurate feature classification using identified feature vectors from the image.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Shan et al. (CN Patent No. 113538257 A) teaches training a computer vision model using postmortem CT data (pg 35: “The real dataset used in this invention comes from a real-world dataset from [16] that includes 850 CT scans of dead pigs obtained by a GE scanner…708 of these were used for training”).

Zhang et al. (previously cited - Zhang, S., Wu, S., Shang, S., Qin, X., Jia, X., Li, D., ... & Wan, M. (2019). Detection and monitoring of thermal lesions induced by microwave ablation using ultrasound imaging and convolutional neural networks. IEEE Journal of Biomedical and Health Informatics, 24(4), 965-973.)
teaches the detection of ablation lesions using ultrasound imaging and a CNN.

Linte et al. (previously cited - Linte, C. A., Camp, J. J., Rettmann, M. E., Haemmerich, D., Aktas, M. K., Huang, D. T., ... & Holmes III, D. R. (2018). Lesion modeling, characterization, and visualization for image-guided cardiac ablation therapy monitoring. Journal of Medical Imaging, 5(2), 021218.) teaches real-time image-guided monitoring during cardiac ablation.

Blondel et al. (previously cited - U.S. Patent Application Publication No. 2025/0078979 A1) teaches a similar method including a likelihood of success of an ablation treatment (abstract: “likelihood of success of said treatment for the ablation of the lesion”).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571) 272-1179. The examiner can normally be reached M-F 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMA E DRYDEN/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

May 17, 2023
Application Filed
Jun 03, 2025
Non-Final Rejection — §103
Aug 29, 2025
Response Filed
Oct 31, 2025
Final Rejection — §103
Feb 02, 2026
Request for Continued Examination
Feb 10, 2026
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873
IMAGE PROCESSING APPARATUS AND METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12543950
SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12526379
AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION
2y 5m to grant Granted Jan 13, 2026
Patent 12340443
METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK
2y 5m to grant Granted Jun 24, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
83%
With Interview (+25.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
