Prosecution Insights
Last updated: April 19, 2026
Application No. 18/764,905

APPARATUS AND METHOD FOR CORRECTING MACHINE LEARNING MODEL PREDICTIONS

Status: Non-Final OA (§103)
Filed: Jul 05, 2024
Examiner: SHINE, NICHOLAS B
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Anumana, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 38% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 5y 1m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 38% (14 granted / 37 resolved; -17.2% vs TC avg)
Interview Lift: +44.6% for resolved cases with interview
Avg Prosecution: 5y 1m typical; 25 applications currently pending
Career History: 62 total applications across all art units

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 37 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/24/2025 has been entered. Claims 1 and 12 are amended. No claim has been cancelled, and there are no new claims. Claims 1–22 are pending for examination.

Response to Arguments

In reference to 35 USC § 101

Applicant’s arguments, filed on 06/24/2025, with respect to the § 101 rejections have been fully considered and are persuasive. Applicant argues, beginning on Pg. 5 in the Remarks, that the newly amended claims “reflect a technically specific solutions for improving machine learning model performance in the context of cardiac signal annotation.” Specifically, applicant argues “Claim 1 of the present application recites a structured process implemented in a specific technological context: an apparatus comprising a processor and memory configured to process cardiac signals, generate automated annotations using a machine learning model, incorporate user corrections, and update both the training data and the model itself in response to these corrections.” Examiner agrees. Examiner notes that while the claims recite several limitations that are abstract ideas (mental concepts including an observation, evaluation, judgment, opinion), the claims as a whole are not directed to an abstract idea.
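The claim-1 process the applicant describes (receive cardiac signal segments, generate automated annotations with a trained model, then fold user corrections back into the signal and the training data) can be sketched as a small loop. This is an illustrative reading only, not the applicant's or Golden's implementation; every name here (`annotate_segments`, `apply_corrections`, `update_training_data`) is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Annotation:
    segment_id: int
    label: str
    source: str  # "model" for automated annotations, "user" for corrections


@dataclass
class TrainingSet:
    # (segment, annotation) pairs used to retrain the annotation model
    examples: list = field(default_factory=list)


def annotate_segments(segments, model):
    """Generate one automated annotation per cardiac signal segment."""
    return [Annotation(i, model(seg), "model") for i, seg in enumerate(segments)]


def apply_corrections(annotations, corrections):
    """Replace each incorrect automated annotation with the user's annotation."""
    fixed = {c.segment_id: c for c in corrections}
    return [fixed.get(a.segment_id, a) for a in annotations]


def update_training_data(training, segments, annotations):
    """Fold the corrected signal back into the annotation training data."""
    training.examples.extend(zip(segments, annotations))
    return training
```

A retraining step would then consume `TrainingSet.examples`; the further claim limitations (synthesizing replaced signal portions, pruning stale training examples) would hang off the same loop.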
Applicant amended the claims to recite auto-populating and updating a cardiac signal (“wherein the processor is configured to update the user interface as a function of a user input, wherein the user input comprises at least a user query, wherein the at least a user query comprises a cross-session state variable configured to autopopulate historical user query data from previous session, and wherein the historical user query data is configured to be used to further update the at least a cardiac signal.”), which is not an abstract idea (see MPEP 2106.04(a)(1)). Thus, these limitations must be considered additional elements to the abstract idea. Examiner notes that these additional elements integrate the abstract idea into a practical application because the entire claim amounts to a detailed method of generating automated annotations (as opposed to a broad recitation of generating and updating at high levels of generality), and the specific method of generating and updating recited in the elements amounts to an improvement to the functioning of a computer, as set forth by MPEP 2106.05(a), which states “the claim must include the components or steps of the invention that provide the improvement described in the specification.” Pursuant to this requirement set forth by the MPEP, Examiner points out that the Specification states in at least [0051, 101]: “With continued reference to FIG. 1 , correction 164 may include automatically processing cardiac signal 116. For instance, apparatus 100 may analyze, modify, and/or synthesize a signal representative of cardiac signal 116 in order to improve the cardiac signal 116, for instance by improving transmission, storage efficiency, or signal-to-noise ratio … With continued reference to FIG.
3, any process of training, retraining, deployment, and/or instantiation of any machine learning model and/or algorithm may be performed and/or repeated after an initial deployment and/or instantiation to correct, refine, and/or improve the machine learning model and/or algorithm.” Thus, the additional elements reflect the improvement set forth and explain what the resulting improvement is.

In reference to 35 USC § 103

Applicant’s arguments filed on 06/24/2025, with respect to the newly amended limitations have been considered but are not persuasive. Applicant argues, beginning on Pg. 9 in the Remarks, that “the above-quoted language does not disclose ‘wherein the processor is configured to update the user interface as a function of a user input, wherein the user input comprises at least a user query, wherein the at least a user query comprises a cross-session state variable configured to autopopulate historical user query data from previous session, and wherein the historical user query data is configured to be used to further update the at least a cardiac signal’.” Examiner agrees. The Final Rejection mailed 03/24/2025 did not include specific rejections for the newly amended limitations. However, Examiner respectfully points to the §103 rejections below, which now address the newly amended limitations. Examiner notes that Golden teaches all of the newly amended limitations. Without any other specific arguments to the contrary, the §103 rejections are maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1–3, 5–8, 10–14, 16–19, and 21–22 are rejected under 35 U.S.C. 103 as being unpatentable over Golden et al. (US 20180259608 A1), hereinafter “Golden”, in view of Gardner et al. (US 9058317 B1), hereinafter “Gardner”, and further in view of Chakravarthy et al. (US 20200352466 A1), hereinafter “Chakravarthy”.
Regarding claim 1, Golden teaches: an apparatus for correcting machine learning model predictions, the apparatus comprising: a processor (Golden ¶0028: “A machine learning system may be summarized as including at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the at least one processor: receives learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure, and each image set including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set; trains a fully convolutional neural network (CNN) model to segment at least one part of the anatomical structure utilizing the received learning data; and stores the trained CNN model in the at least one nontransitory processor-readable storage medium of the machine learning system”—[emphasis added]); and a memory communicatively connected to the processor, wherein the memory contains instructions configuring the processor to (Golden ¶0028: “A machine learning system may be summarized as including at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the at least one processor: receives learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure, and each image set including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set; trains a fully convolutional neural network (CNN) model to segment at least one part 
of the anatomical structure utilizing the received learning data; and stores the trained CNN model in the at least one nontransitory processor-readable storage medium of the machine learning system”—[emphasis added]): receive at least a cardiac signal having a plurality of segments [from a plurality of electrodes] (Golden ¶0028: “A machine learning system may be summarized as including at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the at least one processor: receives learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure, and each image set including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set; trains a fully convolutional neural network (CNN) model to segment at least one part of the anatomical structure utilizing the received learning data; and stores the trained CNN model in the at least one nontransitory processor-readable storage medium of the machine learning system”; see also Golden Fig. 4, ¶0023: “An example case where both the left ventricle and left atrium are visible in a single slice is shown in images 400a and 400b of FIG. 4. If the clinician fails to refer to the current SAX slice projected on a corresponding LAX view, it may not be obvious that the SAX slice spans both the ventricle and atrium. Further, even if the LAX view is available, it may be difficult to tell on the SAX slice where the valve is located, and therefore, where the segmentation of the ventricle should end, since the ventricle and atrium have similar signal intensities. 
Segmentation near the base of the heart is therefore one of the major sources of error for ventricular segmentation”—[(emphasis added) wherein each image is a segment of a surgical recording of an anatomical structure (i.e., plurality of cardiac signal segments; e.g., heart surgery)]); generate, for at least a segment of the plurality of segments, a label representing at least a signal feature (Golden ¶0059: “The at least one processor may, for each of one or more landmarks of the anatomical structure, define a 3D label map based at least in part on the received sets of 3D MRI images and the received plurality of annotations, each 3D label map may encode a likelihood that the landmark is located at a particular location on the 3D label map, wherein the at least one processor may train the CNN model to segment the one or more landmarks utilizing the 3D MRI images and the generated 3D label maps. The images in each of the plurality of sets may represent a heart of a patient at different respective time points of a cardiac cycle, and each annotation may be indicative of a landmark of a heart of a patient depicted in a corresponding image”—[wherein the processor defines (i.e., generates) a 3D label map based on the landmarks of the anatomical structure (i.e., a label representing a signal feature)]); generate at least an automated annotation for the at least a segment as a function of the label using an annotation machine learning model trained with annotation training data wherein the annotation training data comprises a plurality of exemplary cardiac signals as inputs correlated with a plurality of exemplary annotations as outputs (Golden ¶0059: “The at least one processor may, for each of one or more landmarks of the anatomical structure, define a 3D label map based at least in part on the received sets of 3D MRI images and the received plurality of annotations, each 3D label map may encode a likelihood that the landmark is located at a particular location on the 3D 
label map, wherein the at least one processor may train the CNN model to segment the one or more landmarks utilizing the 3D MRI images and the generated 3D label maps. The images in each of the plurality of sets may represent a heart of a patient at different respective time points of a cardiac cycle, and each annotation may be indicative of a landmark of a heart of a patient depicted in a corresponding image”; see also Golden ¶0208: “The SAX segmentation model uses both the raw SAX DICOM data as well as the predicted projected lines from the LAX model(s) as inputs in order to make its prediction. The predicted LAX lines serve to guide and bound the SAX predictions, and particularly aid the model near the base of the heart and valve plane, where the segmentations are often ambiguous when viewed on the SAX stack alone”—[(emphasis added) wherein the processor trains the CNN model to segment and label the MRI images with landmarks including annotations indicative of a landmark of the heart, and wherein the segments are annotated with the predicted projected lines, which are used as inputs in order to make the annotation predictions]); generate, using a correction module, at least a correction upon detecting an absence of annotations (Golden ¶¶0157–0158: “Unlike previous models which were only concerned with two classes for a cell discrimination task, foreground and background, the SSFP model disclosed herein attempts to distinguish four classes, namely, background, LV Endocardium, LV Epicardium and RV Endocardium. To accomplish this, the network output may include three probability maps, one for each non-background class. During training, ground truth binary masks for each of the three classes are provided to the network, along with the pixel data. The network loss may be determined as the sum of the loss over the three classes.
If any of the three ground truth masks are missing for an image (meaning that we have no data, as opposed to the ground truth being an empty mask), that mask may be ignored when calculating the loss. Missing ground truth data is explicitly accounted for during the training process. For example, the network may be trained on an image for which the LV endocardium contour is defined, even if the LV epicardium and RV endocardium contour locations are not known. A more basic architecture that could not account for missing data could only have been trained on a subset (e.g., 20 percent) of training images that have all three types of contours defined. Reducing the training data volume in this way would result in significantly reduced accuracy. Thus, by explicitly modifying the loss function to account for missing data, the full training data volume is used, allowing the network to learn more robust features”—[(emphasis added) wherein the BRI of correction module is any code, instruction, or process executed on a processor, and wherein the system explicitly modifies the loss function to account for missing data]); receive, using a user interface, at least an input from a user (Golden ¶0011: “The most basic method of creating ventricular contours is to complete the process manually with some sort of polygonal or spline drawing tool, without any automated algorithms or tools. In this case, the user may, for example, create a freehand drawing of the outline of the ventricle, or drop spline control points which are then connected with a smoothed spline contour. 
After initial creation of the contour, depending on the software's user interface, the user typically has some ability to modify the contour, e.g., by moving, adding or deleting control points or by moving the spline segments”; see also Golden ¶0178: “At the same time that contours (e.g., contours 1002, 1102 and 1202) are displayed to the user, the system calculates and shows ventricle volumes at ED and ES to the user, as well as multiple computed measurements. An example interface 1300 is shown in FIG. 13 which displays multiple computed measurements. In at least some implementations, these measurements include stroke volume (SV) 1302, which is the volume of blood ejected from the ventricle in one cardiac cycle; ejection fraction (EF) 1304, which is the fraction of the blood pool ejected from the ventricle in one cardiac cycle; cardiac output (CO) 1306, which is the average rate at which blood leaves the ventricle, ED mass 1308, which is the mass of the myocardium (i.e., epicardium-endocardium) for the ventricle at end diastole; and ES mass 1310, which is the mass of the myocardium for the ventricle at end systole”; see also Golden ¶0249: “In order to have a more accurate segmentation of the left and right ventricles, it may be advantageous to identify the position and orientation of the valves of the heart. In at least some implementations, within the aforementioned ventricle segmentation interface, a user is able to mark points that lie on the valve plane using the available long axis views. The valve plane is determined from these input points by performing a regression to find the plane that best fits. The normal for the plane is set to point away from the apex of the ventricle. Once the plane has been defined, any portion of the volume that lies on the positive side is subtracted from the total volume for the ventricle. 
This ensures that nothing outside the valve is included in determining the volume of the ventricle”—[(emphasis added)]); create at least a user annotation within the at least a cardiac signal using the at least an input (Golden ¶0218: “First, the data handling pipeline is described. This section details the process which is followed to create the database of images with their annotations, along with the specific method used to encode landmark location. Second, the architecture of the machine learning approach is presented. How the network transforms the input 3D image into a prediction of landmark location is presented. Third, how the model is trained to the available data is described. Finally, the inference pipeline is detailed. It is shown how one can apply the neural network to an image never used before to predict the region of all six landmarks”; see also Golden ¶0249: “In order to have a more accurate segmentation of the left and right ventricles, it may be advantageous to identify the position and orientation of the valves of the heart. In at least some implementations, within the aforementioned ventricle segmentation interface, a user is able to mark points that lie on the valve plane using the available long axis views. The valve plane is determined from these input points by performing a regression to find the plane that best fits. The normal for the plane is set to point away from the apex of the ventricle. Once the plane has been defined, any portion of the volume that lies on the positive side is subtracted from the total volume for the ventricle. 
This ensures that nothing outside the valve is included in determining the volume of the ventricle”—[(emphasis added)]), and generating, for each incorrect automatic annotation identified based on the validation, the at least a user annotation (Golden ¶0196: “At 2122, the generated splines are forwarded back to the web server after all batches have been processed, where the splines are joined with the inference results from other inference nodes. The web server ensures that the volume is contiguous (i.e., no missing contours in the middle of the volume) by interpolating between neighboring slices if a contour is missing. At 2124, the web server saves the contours in the database, then presents the contours to the user via the web application. If the user edits a spline, the spline's updated version is saved in the database alongside the original, automatically-generated version. In at least some implementations, comparing manually edited contours with their original, automatically-generated versions, may be used to re-train or fine-tune a model only on inference results that required manual correction”; see also Golden ¶0237: “In at least some implementations, the dataset is made of clinical studies uploaded on the web application by previous users. The annotations may be placed by the user on the different images. As explained previously, this dataset is split into a train, validation, and test set”; see also Golden ¶0233: “The training database may be split into a training set used to train the model, a validation set used to quantify the quality of the model, and a test set. The split enforces all the images from a single patient to lie in the same set. This guarantees that the model is not validated with patients used for training. At no point is data from the test set used when training the model. 
Data from the test set may be used to show examples of landmark localization, but this information is not used for training or for ranking models with respect to one another”—[emphasis added]); update the at least a cardiac signal and the annotation machine learning model as a function of the at least a correction and the at least a user annotation (Golden ¶0196: “At 2122, the generated splines are forwarded back to the web server after all batches have been processed, where the splines are joined with the inference results from other inference nodes. The web server ensures that the volume is contiguous (i.e., no missing contours in the middle of the volume) by interpolating between neighboring slices if a contour is missing. At 2124, the web server saves the contours in the database, then presents the contours to the user via the web application. If the user edits a spline, the spline's updated version is saved in the database alongside the original, automatically-generated version. In at least some implementations, comparing manually edited contours with their original, automatically-generated versions, may be used to re-train or fine-tune a model only on inference results that required manual correction”—[emphasis added]); wherein updating the annotation machine learning model comprises: replacing one or more portions of the cardiac signal by synthesizing the cardiac signal as a function of the correction (Golden “Synthesis of Other Views for Automated Volumes,” ¶¶0204–0208: “Thus, a two-stage ventricle segmentation model may be utilized. In a first stage, the ventricles are segmented in one or more LAX planes. Because of the high spatial resolution of these images, the segmentation can be very precise. A disadvantage is the LAX plane consists of only a single plane instead of a volume. If this LAX segmentation is projected to the SAX stack, the LAX segmentation appears as a line on each of the SAX images. 
This line may be created precisely if the line is aggregated across segmentations from multiple LAX views (e.g., 2CH, 3CH, 4CH; see the heading ‘Interface for defining valve planes for manual LV/RV volumes’ below). This line may be used to bound the SAX segmentation, which is generated via a different model that operates on the SAX images. The SAX segmentation model uses both the raw SAX DICOM data as well as the predicted projected lines from the LAX model(s) as inputs in order to make its prediction. The predicted LAX lines serve to guide and bound the SAX predictions, and particularly aid the model near the base of the heart and valve plane, where the segmentations are often ambiguous when viewed on the SAX stack alone”—[wherein the system replaces portions of the cardiac signal (e.g., projecting LAX segmentation on the SAX images) which are used to bound the SAX segmentations (i.e., synthesizing the cardiac signal) and input to the model for predictions]); updating the cardiac signal by replacing each incorrect automatic annotation with the at least a user annotation (Golden ¶0011: “After initial creation of the contour, depending on the software's user interface, the user typically has some ability to modify the contour, e.g., by moving, adding or deleting control points or by moving the spline segments”—[(emphasis added)]); and adding the updated cardiac signal including the one or more replaced portions of the cardiac signal to the annotation training data in order to increase an accuracy of the annotation machine learning model (Golden Figs. 14, 16–18, 21, “Training Database Creation for 4D Flow Data” ¶¶0180–0191: “Whereas the SSFP DICOM files are acquired and stored in SAX orientation, 4D Flow DICOMs are collected and stored as axial slices. In order to create a SAX multi-planar reconstruction (MPR) of the data, the user may need to place the relevant landmarks for the left and/or right heart. 
These landmarks are then used to define unique SAX planes for each ventricle as defined by the ventricle apex and valves. FIG. 14 shows a set 1400 of SAX planes (also referred to as a SAX stack) for the LV in which each SAX plane is parallel for a two chamber view 1402, a three chamber view 1404 and a four chamber view 1406 … FIG. 18 shows a process 1800 of creating a training LMDB from clinician annotations. 4D Flow annotations may be stored in a MongoDB 1802. At 1804 and 1806, the system extracts the contours and landmarks, respectively. Contours are stored as a series of (x, y, z) points defining the splines of the contour. Landmarks are stored as a single four-dimensional coordinate (x, y, z, t) for each landmark”—[(emphasis added)]); and display, using the user interface, the updated at least a cardiac signal to the user (Golden ¶0178: “At the same time that contours (e.g., contours 1002, 1102 and 1202) are displayed to the user, the system calculates and shows ventricle volumes at ED and ES to the user, as well as multiple computed measurements. An example interface 1300 is shown in FIG. 13 which displays multiple computed measurements. In at least some implementations, these measurements include stroke volume (SV) 1302, which is the volume of blood ejected from the ventricle in one cardiac cycle; ejection fraction (EF) 1304, which is the fraction of the blood pool ejected from the ventricle in one cardiac cycle; cardiac output (CO) 1306, which is the average rate at which blood leaves the ventricle, ED mass 1308, which is the mass of the myocardium (i.e., epicardium-endocardium) for the ventricle at end diastole; and ES mass 1310, which is the mass of the myocardium for the ventricle at end systole”; see also Golden ¶0196: “At 2122, the generated splines are forwarded back to the web server after all batches have been processed, where the splines are joined with the inference results from other inference nodes. 
The web server ensures that the volume is contiguous (i.e., no missing contours in the middle of the volume) by interpolating between neighboring slices if a contour is missing. At 2124, the web server saves the contours in the database, then presents the contours to the user via the web application. If the user edits a spline, the spline's updated version is saved in the database alongside the original, automatically-generated version. In at least some implementations, comparing manually edited contours with their original, automatically-generated versions, may be used to re-train or fine-tune a model only on inference results that required manual correction”—[emphasis added]); wherein the processor is configured to update the user interface as a function of a user input (Golden ¶0196: “At 2122, the generated splines are forwarded back to the web server after all batches have been processed, where the splines are joined with the inference results from other inference nodes. The web server ensures that the volume is contiguous (i.e., no missing contours in the middle of the volume) by interpolating between neighboring slices if a contour is missing. At 2124, the web server saves the contours in the database, then presents the contours to the user via the web application. If the user edits a spline, the spline's updated version is saved in the database alongside the original, automatically-generated version. In at least some implementations, comparing manually edited contours with their original, automatically-generated versions, may be used to re-train or fine-tune a model only on inference results that required manual correction”; see also Golden ¶0168: “The inference service is responsible for loading a model, generating contours, and displaying them for the user. After inference is invoked at 902, at 904 images are sent to an inference server. At 906, the production model or network that is used by the inference service is loaded onto the inference server.
The network may have been previously selected from the corpus of models trained during hyperparameter search. The network may be chosen based on a tradeoff between accuracy, memory usage and speed of execution. The user may alternatively be given a choice between a “fast” or “accurate” model via a user preference option”—[(emphasis added)]), wherein the user input comprises at least a user query (Golden ¶0168: “The inference service is responsible for loading a model, generating contours, and displaying them for the user. After inference is invoked at 902, at 904 images are sent to an inference server. At 906, the production model or network that is used by the inference service is loaded onto the inference server. The network may have been previously selected from the corpus of models trained during hyperparameter search. The network may be chosen based on a tradeoff between accuracy, memory usage and speed of execution. The user may alternatively be given a choice between a “fast” or “accurate” model via a user preference option”; see also Golden ¶0224: “Note that alternative strategies may also be used to define the standard deviation (arbitrary value, parameter search) and may lead to comparable results. FIG. 29 shows this transition from a landmark position, identified with a cross 2902 in a view 2904, to a Gaussian 2906 in a view 2908 evaluated on the image for the 2D case”—[(emphasis added)]), wherein the at least a user query comprises a cross-session state variable configured to autopopulate historical user query data from previous session (Golden ¶¶0167–0168: “At 902, after a user has loaded a study in the web application, the user may invoke the inference service (e.g., by clicking a “generate missing contours” icon), which automatically generates any missing (not yet created) contours. Such contours may include LV Endo, LV Epi, or RV Endo, for example.
In at least some implementations, inference may be invoked automatically when the study is either loaded by the user in the application or when the study is first uploaded by the user to a server. If inference is performed at upload time, the predictions may be stored in a nontransitory processor-readable storage medium at that time but not displayed until the user opens the study … The network may have been previously selected from the corpus of models trained during hyperparameter search”; see also Golden ¶0231: “Third, the parameters selected after the hyperparameter search can differ from the DeepVentricle parameters, and are specifically selected to solve the problem at hand. Additionally, the standard deviation used to define the label maps, discussed above, may be considered as a hyperparameter”; see also Golden Fig. 13, ¶0178: “At the same time that contours (e.g., contours 1002, 1102 and 1202) are displayed to the user, the system calculates and shows ventricle volumes at ED and ES to the user, as well as multiple computed measurements. An example interface 1300 is shown in FIG. 13 which displays multiple computed measurements”—[(emphasis added) wherein the BRI of a cross-session state variable is any variable recording data entered on a remote device during a previous session (see present disclosure [0046])]), and wherein the historical user query data is configured to be used to further update the at least a cardiac signal (Golden “Synthesis of Other Views for Automated Volumes,” ¶¶0204–0208: “Thus, a two-stage ventricle segmentation model may be utilized. In a first stage, the ventricles are segmented in one or more LAX planes … The SAX segmentation model uses both the raw SAX DICOM data as well as the predicted projected lines from the LAX model(s) as inputs in order to make its prediction.
The predicted LAX lines serve to guide and bound the SAX predictions, and particularly aid the model near the base of the heart and valve plane, where the segmentations are often ambiguous when viewed on the SAX stack alone”; see also Golden ¶0024: “In the 4D Flow workflow of a cardiac imaging application, the user may be required to define the regions of different landmarks in the heart in order to see different cardiac views (e.g., 2CH, 3CH, 4CH, SAX) and segment the ventricles”; see also Golden ¶0168: “The inference service is responsible for loading a model, generating contours, and displaying them for the user. After inference is invoked at 902, at 904 images are sent to an inference server. At 906, the production model or network that is used by the inference service is loaded onto the inference server. The network may have been previously selected from the corpus of models trained during hyperparameter search. The network may be chosen based on a tradeoff between accuracy, memory usage and speed of execution. The user may alternatively be given a choice between a “fast” or “accurate” model via a user preference option”—[(emphasis added) wherein the system replaces portions of the cardiac signal (i.e., updating the cardiac signal) which uses the SAX model capturing both the raw SAX DICOM data as well as the predicted projected lines (i.e., historical user query data) provided by the LAX model configured with user hyperparameter search data (i.e., query) required to define the regions of the cardiac views]). Golden does not appear to explicitly teach: [receive at least a cardiac signal having a plurality of segments] from a plurality of electrodes; removing at least one exemplary input and correlated exemplary output of the annotation training data as a function of the user annotation; and wherein creating the at least a user annotation comprises: validating the at least an automated annotation. 
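The two-stage scheme Golden ¶¶0204–0208 describes amounts to feeding the SAX segmentation model both the raw SAX image and the rasterized LAX-projection lines as an extra input channel. A minimal sketch of that input construction follows; the shapes, function name, and channel layout are illustrative assumptions, not taken from the reference:

```python
import numpy as np

def build_sax_input(sax_image, projected_lax_lines):
    """Stack the raw SAX slice with the rasterized LAX projection so a
    segmentation network can consume both as input channels.
    (Illustrative sketch only; names and shapes are assumptions.)"""
    assert sax_image.shape == projected_lax_lines.shape
    return np.stack([sax_image, projected_lax_lines], axis=0)  # (2, H, W)

sax = np.zeros((128, 128), dtype=np.float32)       # raw SAX DICOM slice (stand-in)
lax_mask = np.zeros_like(sax)
lax_mask[60, :] = 1.0                              # rasterized LAX line crossing the slice
x = build_sax_input(sax, lax_mask)
print(x.shape)  # (2, 128, 128)
```

In this reading, the second channel is what "serves to guide and bound" the SAX prediction near the base of the heart, where the SAX stack alone is ambiguous.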
However, Gardner teaches: wherein creating the at least a user annotation comprises: validating the at least an automated annotation (Gardner Col. 5, lines 52–60: “The routine 200 proceeds from operation 208 to operation 210, where the identified inaccurate annotations are corrected. Operations 208 and 210 may include inspecting and/or validating annotations generated according to the predictive annotations. Inspection may be performed in response to receiving a user selection of one or more types of annotations to review. Following operation 210, at operation 212 one or more new annotations may be created, i.e. annotations that were not generated by predictive annotation”; see also Gardner Col. 7, lines 32–52: “Now referring specifically to buttons within the illustrated group 302, "Inspection" enables the inspection mode and "Creation" enables the creation mode with respect to the selected text; "All" enables the inspection and updating of all annotations to the text; "Sentence" enables inspection and updating of sentence annotations when the Inspection mode is active and creation of sentence annotations when the Creation mode is active; "Phrase" enables the inspection and updating of phrase annotations in the Inspection mode and creation of phrase annotations in the Creation mode; "Token" enables the inspection and updating of token annotations in the Inspection mode and the creation of token annotations in the Creation mode; "Coreference" enables the inspection and updating of coreference annotations in the Inspection mode and creation of coreference annotations in the Creation mode; "Assertion" enables the inspection and updating of assertion annotations in the Inspection mode and the creation of assertion annotations in the Creation mode; and "Note" enables the inspection and updating of note annotations in the Inspection mode and the creation of note annotations in the Creation mode. 
Within the illustrated group of buttons 304 and 306, "Save" enables the saving of open files; "Predict Annotations" enables the predictions of annotations to a selected file; "Validate DAF" enables the validations of the annotations in a selected file; and "Toggle LTR/RTL" enables toggling text alignment in the selected file”—[emphasis added]); removing at least one exemplary input and correlated exemplary output of the annotation training data as a function of the user annotation (Gardner Figs. 2, 3, Col. 5, line 11 – Col. 6, line 19: “From operation 204, the routine 200 proceeds to operation 206, where predictive annotations to the sequence of characters are generated based at least in part on the identified data features. Generating the predictive annotations may include predictively labeling factual assertions, parts of speech, syntactic roles, token boundaries, and/or categories associated with the sequence of characters. Predictive labeling may be based on resource data that includes lexicons identifying predefined associations between particular sequences of characters and labels. From operation 206, the routine 200 proceeds to operation 208, where inaccurate annotations generated according to the predictive annotations are identified”; see also Gardner Col. 7, line 32 – Col. 8, line 35: “Now referring specifically to buttons within the illustrated group 302, “Inspection” enables the inspection mode and “Creation” enables the creation mode with respect to the selected text; “All” enables the inspection and updating of all annotations to the text”—[wherein the system identifies the inaccurate annotations, based on the input and output of the generative prediction model, and replaces them with the corrected user annotations (e.g., created via the user interface)]); The system of Golden, the teachings of Gardner, and the instant application are analogous art because they pertain to training machine learning models with annotated segmented data. 
It would be obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Golden with the teachings of Gardner to provide for validating that the system automated annotation is correct. One would be motivated to do so to correct incorrect annotations (Gardner Col. 5, lines 52–60: “The routine 200 proceeds from operation 208 to operation 210, where the identified inaccurate annotations are corrected. Operations 208 and 210 may include inspecting and/or validating annotations generated according to the predictive annotations. Inspection may be performed in response to receiving a user selection of one or more types of annotations to review. Following operation 210, at operation 212 one or more new annotations may be created, i.e. annotations that were not generated by predictive annotation”). Golden in view of Gardner does not appear to explicitly teach: [receive at least a cardiac signal having a plurality of segments] from a plurality of electrodes; However, Chakravarthy teaches: [receive at least a cardiac signal having a plurality of segments] from a plurality of electrodes (Chakravarthy ¶0047: “Sensing circuitry 52 and communication circuitry 54 may be selectively coupled to electrodes 16A, 16B via switching circuitry 60 as controlled by processing circuitry 50. Sensing circuitry 52 may monitor signals from electrodes 16A, 16B in order to monitor electrical activity of a heart of patient 4 of FIG. 1 and produce cardiac electrogram data for patient 4. In some examples, processing circuitry 50 may perform feature delineation of the sensed cardiac electrogram data to detect an episode of cardiac arrhythmia of patient 4. In some examples, processing circuitry 50 transmits, via communication circuitry 54, the cardiac electrogram data for patient 4 to an external device, such as external device 12 of FIG. 1. 
For example, IMD 10 sends digitized cardiac electrogram data to network 25 for processing by machine learning system 150 of FIG. 1. In some examples, IMD 10 transmits one or more segments of the cardiac electrogram data in response to detecting, via feature delineation, an episode of arrhythmia”—[(emphasis added)]). The system of Golden, the teachings of Chakravarthy, and the instant application are analogous art because they pertain to training machine learning models with annotated segmented data. It would be obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Golden in view of Gardner with the teachings of Chakravarthy to provide for the cardiac signals to be received data from electrodes. One would be motivated to do so to monitor and produce sensed electrogram cardiac data for a patient (Chakravarthy ¶0047: “Sensing circuitry 52 and communication circuitry 54 may be selectively coupled to electrodes 16A, 16B via switching circuitry 60 as controlled by processing circuitry 50. Sensing circuitry 52 may monitor signals from electrodes 16A, 16B in order to monitor electrical activity of a heart of patient 4 of FIG. 1 and produce cardiac electrogram data for patient 4”). Regarding claim 2, Golden in view of Gardner and Chakravarthy teaches all the limitations of claim 1. 
Golden teaches: wherein generating the label comprises: receiving the plurality of training data comprising the plurality of exemplary cardiac signals as inputs correlated with the plurality of exemplary labels as outputs (Golden ¶0028: “A machine learning system may be summarized as including at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the at least one processor: receives learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure, and each image set including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set; trains a fully convolutional neural network (CNN) model to segment at least one part of the anatomical structure utilizing the received learning data; and stores the trained CNN model in the at least one nontransitory processor-readable storage medium of the machine learning system”; see also Golden ¶0226: “FIG. 30 shows a process 3000 for a preprocessing pipeline. At 3006 and 3008, the 3D MRIs 3002 and label maps 3004, respectively, are resized to a predefined size $n_x \times n_y \times n_z$ such that all of the MRIs can be fed to the same neural network. At 3010, the intensity of the MRI pixels are clipped between the 1st and 99th percentile. This means that the pixel intensity will saturate at the value of the intensity corresponding to the 1st and 99th percentile. This removes outlier pixel intensities that may be caused by artifacts. At 3012, the intensities are then scaled to lie between 0 and 1.
At 3014, the intensity histogram is then normalized using contrast limited adaptive histogram equalization to maximize contrast in the image and minimize intra-image intensity differences (as may be caused by, for example, magnetic field inhomogeneities). Finally, at 3016 the image is centered to have zero mean. Other strategies may be used for the normalization of the image intensity, such as normalizing the variance of the input to one, and may yield similar results. This pipeline results in preprocessed images 3018 and labels 3020 which can be fed to the network”—[(emphasis added) wherein the received learning data includes MRI’s and label maps (i.e., exemplary cardiac signals as inputs correlated with a plurality of exemplary labels as outputs)]); training a labeling machine learning model as a function of the plurality of training data (Golden ¶0225: “At 2812, once the 3D volumes have been defined for both the MRI and the label map, the images are preprocessed. Generally, the goal is to normalize the images size and appearance for future training”; see also Golden ¶0228: “Returning to FIG. 28, at 2814 an upload ID is defined to be the key that identifies the pair (MRI, label map), which is stored in a training LMDB database at 2816. Finally, at 2818 the pair (MRI, label map) is written to the LMDB”; see also Golden Fig. 8, ¶0232: “The following discussion describes how the deep neural network can be trained using the LMDB database of 3D MRI and label map pairs. The overall objective is to tune the parameters of the network such that the network is able to predict the position of the heart landmarks on previously unseen images. A flowchart of the training process is shown in FIG. 8 and described above”); and generating the label using the labeling machine learning model (Golden ¶0231: “The network used for landmark detection differs from the DeepVentricle implementation discussed above in three main ways.
First, the architecture is three dimensional: the network processes a 3D MRI in a single pass, producing a 3D label map for every landmark. Second, the network predicts 6 classes, one for each landmark. Third, the parameters selected after the hyperparameter search can differ from the DeepVentricle parameters, and are specifically selected to solve the problem at hand. Additionally, the standard deviation used to define the label maps, discussed above, may be considered as a hyperparameter. The output of the network is a 3D map which encodes where the landmark is positioned. High values of the map may correspond to likely landmark position, and low values may correspond to unlikely landmark position”—[emphasis added]). Regarding claim 3, Golden in view of Gardner and Chakravarthy teaches all the limitations of claim 1. Golden teaches: wherein: the at least a cardiac signal comprises time series data, wherein the plurality of segments comprises a plurality of time series segments (Golden ¶0219: “For the presented machine learning approach, a database of 4D Flow data is used, which includes three dimensional (3D) magnetic resonance images (MRI) of the heart, stored as series of two dimensional (2D) DICOM images. Typically, around 20 3D volumetric images are acquired throughout a single cardiac cycle, each corresponding to one snapshot of the heartbeat. The initial database thus corresponds to the 3D images of different patients at different time steps. Each 3D MRI presents a number of landmark annotations, from zero landmark to six landmarks, placed by the user of the web application. The landmark annotations, if present, are stored as vectors of coordinates (x, y, z, t) indicating the position (x, y, z) of the landmark in the 3D MRI corresponding to the time point t”; see also Golden ¶0244: “In at least some implementations, the automatic location of cardiac landmarks may be achieved by directly predicting the coordinates (x, y, z) of the different landmarks.
For that, a different network architecture may be used. This alternative network may be composed of a contracting path, followed with several fully connected layers, with a length-three vector of (x, y, z) coordinates as the output for each landmark. This is a regression, rather than a segmentation network. Note that, in the regression network, unlike in the segmentation network, there is no expanding path in the network. Other architectures may also be used with the same output format. In at least some implementations, time may also be included in the output as a fourth dimension if 4D data (x, y, z, time) is given as input”—[wherein the data includes 3D images of different patients at different time steps and 4D segment data including time]); and displaying the at least a cardiac signal comprises displaying at least a time series segment of the plurality of time series segments (Golden ¶¶0178–0179: “At the same time that contours (e.g., contours 1002, 1102 and 1202) are displayed to the user, the system calculates and shows ventricle volumes at ED and ES to the user, as well as multiple computed measurements. An example interface 1300 is shown in FIG. 13 which displays multiple computed measurements. In at least some implementations, these measurements include stroke volume (SV) 1302, which is the volume of blood ejected from the ventricle in one cardiac cycle; ejection fraction (EF) 1304, which is the fraction of the blood pool ejected from the ventricle in one cardiac cycle; cardiac output (CO) 1306, which is the average rate at which blood leaves the ventricle, ED mass 1308, which is the mass of the myocardium (i.e., epicardium-endocardium) for the ventricle at end diastole; and ES mass 1310, which is the mass of the myocardium for the ventricle at end systole. For 4D Flow data, the same DeepVentricle architecture, hyperparameter search methodology, and training database as described above for SSFP data may be used. 
Training a 4D Flow model may be the same as in the SSFP operation discussed above, but the creation of an LMDB and inference may be different for the 4D Flow implementation”; see also Golden ¶0241: “In at least some implementations, the 3D images acquired are 4D Flow sequences. This means that the phase of the signal is also acquired, and may be used to quantify the velocity of the blood flow in the heart and arteries, as shown in the image 3400 of FIG. 34 which shows four different views. This information can be useful to
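The preprocessing pipeline quoted above from Golden ¶0226 (resize, clipping at the 1st/99th percentile, scaling to [0, 1], CLAHE, zero-mean centering) can be sketched as follows. This is an illustrative reading of the cited steps, not code from the reference; the function and parameter names are assumptions, and the resize and CLAHE stages are elided to comments:

```python
import numpy as np

def preprocess(volume, lo_pct=1.0, hi_pct=99.0):
    """Sketch of the normalization steps described in Golden ¶0226.
    (Resizing to a fixed n_x x n_y x n_z and CLAHE are omitted here;
    names and defaults are illustrative assumptions.)"""
    v = volume.astype(np.float64)
    # Clip outlier intensities: saturate at the 1st/99th percentile values.
    lo, hi = np.percentile(v, [lo_pct, hi_pct])
    v = np.clip(v, lo, hi)
    # Scale intensities to lie between 0 and 1.
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    # (Contrast limited adaptive histogram equalization would go here.)
    # Center the image to have zero mean.
    return v - v.mean()

vol = np.random.default_rng(0).normal(size=(8, 8, 4))
out = preprocess(vol)
print(round(float(out.mean()), 6))  # ~0.0 after centering
```

Per ¶0226, the clipping step removes outlier pixel intensities caused by artifacts before the intensities are rescaled and centered.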

Prosecution Timeline

Jul 05, 2024
Application Filed
Sep 27, 2024
Non-Final Rejection — §103
Feb 06, 2025
Applicant Interview (Telephonic)
Feb 06, 2025
Examiner Interview Summary
Feb 10, 2025
Response Filed
Mar 18, 2025
Final Rejection — §103
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Dec 10, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579449
HYDROCARBON OIL FRACTION PREDICTION WHILE DRILLING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572440
AUTOMATICALLY DETECTING WORKLOAD TYPE-RELATED INFORMATION IN STORAGE SYSTEMS USING MACHINE LEARNING TECHNIQUES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561554
ERROR IDENTIFICATION FOR AN ARTIFICIAL NEURAL NETWORK
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12533800
TRAINING REINFORCEMENT LEARNING AGENTS TO LEARN FARSIGHTED BEHAVIORS BY PREDICTING IN LATENT SPACE
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12536428
KNOWLEDGE GRAPHS IN MACHINE LEARNING DECISION OPTIMIZATION
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
82%
With Interview (+44.6%)
5y 1m
Median Time to Grant
High
PTA Risk
Based on 37 resolved cases by this examiner. Grant probability derived from career allow rate.
