DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6-7, and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over DUVAL (US 20240016551 A1) in view of WUBBELS (US 11580422 B1, provided in the IDS).
Re Claim 1, DUVAL discloses a method comprising:
extracting characteristics of input data fed into a trained machine learning model, to produce an input data characterization, wherein the input data includes a medical image (see DUVAL: e.g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]);
Although DUVAL discloses “validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model,” as recited above,
DUVAL does not explicitly disclose extracting characteristics of performance of the trained machine learning model during mapping of the medical image to an output, to produce a model performance characterization;
WUBBELS discloses extracting characteristics of performance of the trained machine learning model during mapping of the medical image to an output, to produce a model performance characterization (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and,
--To detect the anomaly associated with inference, in step 1540 the processor can determine an acceptable range of the expected distribution of inference…In step 1580, when monitoring the performance and detecting the anomaly indicate a substantial decrease in the accuracy of the machine learning model… --, in lines 9-57, col. 19; and,
--In addition to monitoring age of the subjects, the processor can monitor all the various dimensions collected….--, in lines 11-66, col. 20 {the recited “monitoring,” detecting, and determining of the performance of the machine learning model read on, i.e., align with, the claimed limitation of “extracting characteristics of performance of the trained machine learning model”});
DUVAL and WUBBELS are combinable because they are in the same field of endeavor: monitoring and validating the performance of a machine learning model for medical images and data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify DUVAL’s method, using WUBBELS’s teachings, by adding extracting characteristics of performance of the trained machine learning model during mapping of the medical image to an output, to produce a model performance characterization, to DUVAL’s validation of the trained machine-learning model, in order to monitor medical diagnostic inferences by a target machine learning model and detect performance anomalies of the machine learning model (see WUBBELS: e.g., in abstract; in lines 54-59, col. 2; in lines 9-57, col. 19; and in lines 11-66, col. 20);
DUVAL as modified by WUBBELS further discloses extracting characteristics of user feedback received based on the output of the trained machine learning model, to produce a user feedback characterization (see WUBBELS: e.g., --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {reads on “user feedback”; herein, “feature” and “diagnosis a disease” align with “extracting characteristics of user feedback”}; also see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
--, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]; and,
--More particularly, for each completed ERCP procedure, the training data may include images, videos, medical reports, etc. associated with one or more anatomical objects of interest detected during previously completed ERCP procedures (e.g., a papilla, one or more orifices on the papilla, a biliary duct, a pancreatic duct etc.). This data may have been captured using one or more sensors associated with the medical device (e.g., an optical camera) and/or other imaging modalities (X-ray imaging, fluoroscopy, etc.). In an embodiment, the training data may also include position and/movement data of a medical device (e.g., an endoscope) and/or components thereof (e.g., a guidewire) in relation to one or more anatomical objects during the procedure. The position and/or movement data may have been captured using one or more other sensors (e.g., electromagnetic (EM) sensors, accelerometers, gyroscopes, fiber optics, ultrasound transducer, capacitive or inductive position sensors, etc.), and/or may have been obtained via any other suitable means, e.g., via observation by a person and/or automated system, via feedback of a controller for the medical device, etc. In an embodiment, the training data may also contain an indication of the outcome of each of the completed ERCP procedures (e.g., positive outcome, negative outcome, severity of negative outcome, etc.).--, in [0053]; and, --the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.--, in [0073], and [0076]);
determining an input data deviation by comparing the input data characterization against a plurality of previously determined input data characterizations (see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and also see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2);
determining a model performance deviation by comparing the model performance characterization against a plurality of previously determined model performance characterizations (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and,
--compute metrics to evaluate the model performance and compare them with benchmark metrices--, in lines 28-29, col. 27);
determining a user feedback deviation by comparing the user feedback characterization against a plurality of previously determined user feedback characterizations (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {reads on “user feedback”; herein, “feature” and “diagnosis a disease” align with “extracting characteristics of user feedback”}; and, --When the multiple interferences are not substantially the same as the interference of the machine learning model 1100, […] the validator module 1100 can note a decrease in the accuracy of the machine learning model--, in lines 56-60, col. 14 {herein, “decrease in the accuracy” is the result of comparison to previous characterizations, and is consistent with DUVAL’s confidence weights/scores as cited and discussed below};
also see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
--, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]; and,
--More particularly, for each completed ERCP procedure, the training data may include images, videos, medical reports, etc. associated with one or more anatomical objects of interest detected during previously completed ERCP procedures (e.g., a papilla, one or more orifices on the papilla, a biliary duct, a pancreatic duct etc.). This data may have been captured using one or more sensors associated with the medical device (e.g., an optical camera) and/or other imaging modalities (X-ray imaging, fluoroscopy, etc.). In an embodiment, the training data may also include position and/movement data of a medical device (e.g., an endoscope) and/or components thereof (e.g., a guidewire) in relation to one or more anatomical objects during the procedure. The position and/or movement data may have been captured using one or more other sensors (e.g., electromagnetic (EM) sensors, accelerometers, gyroscopes, fiber optics, ultrasound transducer, capacitive or inductive position sensors, etc.), and/or may have been obtained via any other suitable means, e.g., via observation by a person and/or automated system, via feedback of a controller for the medical device, etc. In an embodiment, the training data may also contain an indication of the outcome of each of the completed ERCP procedures (e.g., positive outcome, negative outcome, severity of negative outcome, etc.).--, in [0053]; and, --the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.--, in [0073], and [0076]); and
responding to one or more of the input data deviation exceeding an input data deviation threshold, the model performance deviation exceeding a model performance deviation threshold, and the user feedback deviation exceeding a user feedback deviation threshold, by transmitting an alert to a user device (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {read on “user feedback”, and herein “feature” “diagnosis a disease” align with “extracting characteristics of user feedback”; also see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
--, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]; and,
--More particularly, for each completed ERCP procedure, the training data may include images, videos, medical reports, etc. associated with one or more anatomical objects of interest detected during previously completed ERCP procedures (e.g., a papilla, one or more orifices on the papilla, a biliary duct, a pancreatic duct etc.). This data may have been captured using one or more sensors associated with the medical device (e.g., an optical camera) and/or other imaging modalities (X-ray imaging, fluoroscopy, etc.). In an embodiment, the training data may also include position and/movement data of a medical device (e.g., an endoscope) and/or components thereof (e.g., a guidewire) in relation to one or more anatomical objects during the procedure. The position and/or movement data may have been captured using one or more other sensors (e.g., electromagnetic (EM) sensors, accelerometers, gyroscopes, fiber optics, ultrasound transducer, capacitive or inductive position sensors, etc.), and/or may have been obtained via any other suitable means, e.g., via observation by a person and/or automated system, via feedback of a controller for the medical device, etc. In an embodiment, the training data may also contain an indication of the outcome of each of the completed ERCP procedures (e.g., positive outcome, negative outcome, severity of negative outcome, etc.).--, in [0053]; and, --the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.--, in [0073], and [0076]).
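For illustration only, the overall monitoring flow recited in claim 1 (characterizing the input data, the model performance, and the user feedback; comparing each characterization against a plurality of previously determined characterizations; and transmitting an alert to a user device when a deviation exceeds its threshold) may be sketched in Python as follows. The sketch is hypothetical and is not drawn from DUVAL or WUBBELS; the function names, the z-score deviation measure, and the threshold values are assumptions.

from statistics import mean, stdev

def deviation(current: float, history: list) -> float:
    # Distance of the current characterization from the plurality of
    # previously determined characterizations, expressed as a z-score.
    if len(history) < 2:
        return 0.0  # insufficient history to measure a deviation
    sigma = stdev(history)
    return abs(current - mean(history)) / sigma if sigma else 0.0

def send_alert(message: str) -> None:
    # Stand-in for transmitting an alert to a user device.
    print("ALERT:", message)

def monitor(input_char: float, perf_char: float, feedback_char: float,
            history: dict, thresholds: dict) -> None:
    # Compare each characterization against its own history and alert
    # whenever the corresponding deviation threshold is exceeded.
    for name, value in (("input_data", input_char),
                        ("model_performance", perf_char),
                        ("user_feedback", feedback_char)):
        if deviation(value, history[name]) > thresholds[name]:
            send_alert(f"{name} deviation exceeded threshold")
        history[name].append(value)

In this sketch, each per-name history plays the role of the claimed plurality of previously determined characterizations, and the three threshold entries play the role of the claimed input data, model performance, and user feedback deviation thresholds.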
Re Claim 2, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of input data fed into the trained machine learning model to produce the input data characterization comprises:
extracting metadata from the medical image, wherein the metadata does not include personally identifiable information (see DUVAL: e.g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]; also see WUBBELS: e.g., --comparing distributions of interference results and other input dimensions (e.g., ethnicity, camera type, technician skill level, etc.) over time with incoming interference results and new input data for a period of time--, in lines 31-34, col. 17);
aggregating pixel or voxel statistics from the medical image, wherein the pixel or voxel statistics do not preserve individual pixel or voxel intensity values or locations (see DUVAL: e.g., -- The determination of the navigational guidance may include identifying anatomical feature data from the image data using the predictive navigational guidance model. The identification of the anatomical feature data may include: identifying a first classification associated with a first anatomical object within a first target region of the image data; identifying a second classification associated with a second anatomical object from within a second target region bounded by the first target region; detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount.--, in [0007]);
determining one or more tags from an appearance ontology for the medical image (see DUVAL: e.g., -- [0073] Conversely to the foregoing, responsive to determining, at step 710, one or more types of predictive navigational guidance, an embodiment may, at step 720, generate one or more visual representations associated with the determined predictive navigational guidance. In an embodiment, the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.
[0074] At step 725, an embodiment may transmit instructions to a user device to display/overlay the visual representations of the predictive guidance overtop some or all portions of the image data. For example, in an embodiment, the server system 115 may be configured to transmit instructions to the user device to annotate one or more relevant anatomical objects during the medical procedure. In an embodiment, potential annotations may include anatomical object coloring/highlighting (e.g., where each detected relevant anatomical object is colored a different specific color, etc.), ROI designation (e.g., where relevant zones in the image data are delineated via a target box or outline, etc.), text identifiers (e.g., where each detected relevant anatomical object is textually identified, etc.), a combination thereof, and the like. Turning now to FIG. 8, a non-limiting example of annotations overlaid atop image data associated with a target papilla is provided.--, in [0073]-[0074]); and
determining one or more tags from a clinical ontology for the medical image (see DUVAL: e.g., -- [0073] Conversely to the foregoing, responsive to determining, at step 710, one or more types of predictive navigational guidance, an embodiment may, at step 720, generate one or more visual representations associated with the determined predictive navigational guidance. In an embodiment, the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.
[0074] At step 725, an embodiment may transmit instructions to a user device to display/overlay the visual representations of the predictive guidance overtop some or all portions of the image data. For example, in an embodiment, the server system 115 may be configured to transmit instructions to the user device to annotate one or more relevant anatomical objects during the medical procedure. In an embodiment, potential annotations may include anatomical object coloring/highlighting (e.g., where each detected relevant anatomical object is colored a different specific color, etc.), ROI designation (e.g., where relevant zones in the image data are delineated via a target box or outline, etc.), text identifiers (e.g., where each detected relevant anatomical object is textually identified, etc.), a combination thereof, and the like. Turning now to FIG. 8, a non-limiting example of annotations overlaid atop image data associated with a target papilla is provided.--, in [0073]-[0074]).
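For illustration only, the input data characterization steps recited in claim 2 (metadata extraction without personally identifiable information, aggregation of pixel or voxel statistics that preserve no individual intensity values or locations, and tagging from appearance and clinical ontologies) may be sketched as follows. The sketch is hypothetical; the DICOM-style field names, the de-identification list, and the two ontologies are assumptions and are not taken from DUVAL or WUBBELS.

import numpy as np

# Assumed de-identification list of personally identifiable metadata fields.
PII_TAGS = {"PatientName", "PatientID", "PatientBirthDate"}

def characterize_input(pixels: np.ndarray, metadata: dict) -> dict:
    return {
        # Metadata with personally identifiable information removed.
        "metadata": {k: v for k, v in metadata.items() if k not in PII_TAGS},
        # Aggregate statistics only; individual pixel/voxel intensity
        # values and locations are not preserved.
        "pixel_stats": {
            "mean": float(pixels.mean()),
            "std": float(pixels.std()),
            "min": float(pixels.min()),
            "max": float(pixels.max()),
        },
        # Tags drawn from assumed appearance and clinical ontologies.
        "appearance_tags": ["low_contrast"] if pixels.std() < 10 else ["normal_contrast"],
        "clinical_tags": [metadata.get("BodyPartExamined", "unknown")],
    }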
Re Claim 4, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of performance of the trained machine learning model during mapping of the medical image to the output, to produce the model performance characterization comprises:
capturing the output of the trained machine learning model, along with one or more intermediate outputs produced by one or more hidden layers of the trained machine learning model (see DUVAL: e.g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]);
determining a confidence score for the output of the trained machine learning model (see DUVAL: e.g., -- The determination of the navigational guidance may include identifying anatomical feature data from the image data using the predictive navigational guidance model. The identification of the anatomical feature data may include: identifying a first classification associated with a first anatomical object within a first target region of the image data; identifying a second classification associated with a second anatomical object from within a second target region bounded by the first target region; detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount.--, in [0007]); and
determining one or more uncertainty metrics for the output of the trained machine learning model (see DUVAL: e.g., -- The determination of the navigational guidance may include identifying anatomical feature data from the image data using the predictive navigational guidance model. The identification of the anatomical feature data may include: identifying a first classification associated with a first anatomical object within a first target region of the image data; identifying a second classification associated with a second anatomical object from within a second target region bounded by the first target region; detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount.--, in [0007]; also see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {reads on “user feedback”; herein, “feature” and “diagnosis a disease” align with “extracting characteristics of user feedback”}; and, --When the multiple interferences are not substantially the same as the interference of the machine learning model 1100, […] the validator module 1100 can note a decrease in the accuracy of the machine learning model--, in lines 56-60, col. 14 {herein, “decrease in the accuracy” is the result of comparison to previous characterizations, and is consistent with DUVAL’s confidence weights/scores as cited and discussed above}).
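For illustration only, the model performance characterization steps recited in claim 4 (capturing the model output along with an intermediate output of a hidden layer, determining a confidence score, and determining an uncertainty metric) may be sketched as follows. The sketch is hypothetical; the two-layer network, the use of the maximum softmax probability as the confidence score, and the use of predictive entropy as the uncertainty metric are assumptions, not teachings of DUVAL or WUBBELS.

import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def characterize_performance(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> dict:
    hidden = np.tanh(x @ w1)      # intermediate output of a hidden layer
    probs = softmax(hidden @ w2)  # final output of the trained model
    return {
        "output": probs,
        "intermediate": hidden,            # captured hidden-layer activations
        "confidence": float(probs.max()),  # confidence score for the output
        # Predictive entropy as one possible uncertainty metric.
        "entropy": float(-(probs * np.log(probs + 1e-12)).sum()),
    }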
Re Claim 6, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of user feedback received based on the output of the trained machine learning model, to produce the user feedback characterization comprises:
recording one or more of:
a model output rating received via a user input device (see DUVAL: e.g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]); and
a user correction received via the user input device, wherein the user correction modifies the output of the trained machine learning model (see WUBBELS: e.g., --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {reads on “user feedback”; herein, “feature” and “diagnosis a disease” align with “extracting characteristics of user feedback”}; also see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
--, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]; and,
--More particularly, for each completed ERCP procedure, the training data may include images, videos, medical reports, etc. associated with one or more anatomical objects of interest detected during previously completed ERCP procedures (e.g., a papilla, one or more orifices on the papilla, a biliary duct, a pancreatic duct etc.). This data may have been captured using one or more sensors associated with the medical device (e.g., an optical camera) and/or other imaging modalities (X-ray imaging, fluoroscopy, etc.). In an embodiment, the training data may also include position and/movement data of a medical device (e.g., an endoscope) and/or components thereof (e.g., a guidewire) in relation to one or more anatomical objects during the procedure. The position and/or movement data may have been captured using one or more other sensors (e.g., electromagnetic (EM) sensors, accelerometers, gyroscopes, fiber optics, ultrasound transducer, capacitive or inductive position sensors, etc.), and/or may have been obtained via any other suitable means, e.g., via observation by a person and/or automated system, via feedback of a controller for the medical device, etc. In an embodiment, the training data may also contain an indication of the outcome of each of the completed ERCP procedures (e.g., positive outcome, negative outcome, severity of negative outcome, etc.).--, in [0053]; and, --the one or more visual representations may correspond to one or more: annotations identifying relevant anatomical objects, trajectory recommendations for maneuvering the medical device and/or components thereof, and/or feedback notifications alerting a medical device operator to updates occurring in the medical procedure.--, in [0073], and [0076]).
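For illustration only, the user feedback recording recited in claim 6 (a model output rating received via a user input device, and a user correction that modifies the output of the trained machine learning model) may be sketched as follows. The sketch is hypothetical; the record fields and the in-memory log are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackRecord:
    model_output: str                    # output of the trained machine learning model
    rating: Optional[int] = None         # model output rating received via a user input device
    correction: Optional[str] = None     # user correction received via the user input device

    def effective_output(self) -> str:
        # A correction, when present, modifies (supersedes) the model output.
        return self.correction if self.correction is not None else self.model_output

feedback_log: list = []

def record_feedback(output: str, rating=None, correction=None) -> FeedbackRecord:
    rec = FeedbackRecord(model_output=output, rating=rating, correction=correction)
    feedback_log.append(rec)  # retained for later user feedback characterization
    return rec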
Re Claim 7, DUVAL as modified by WUBBELS further discloses that extracting characteristics of user feedback received based on the output of the trained machine learning model, to produce the user feedback characterization, further comprises:
receiving comments from a user via the user input device (see DUVAL: e.g., --using a processor associated with the computer server and via application of a trained predictive navigational guidance model to the image data, navigational guidance for the medical device in relation to the at least one anatomical object; generating, based on the determining, at least one visual representation associated with the navigational guidance; and transmitting, to a user device in network communication with the computer server, instructions to display the at least one visual representation associated with the navigational guidance overtop of the image data on a display screen of the user device.--, in [0006]; and, --[t]he display/UI 105A may be a touch screen or a display with other input systems (e.g., mouse, keyboard, etc.) so that the user(s) may interact with the application and/or the O/S. The network interface 105D may be a TCP/IP network interface for, e.g., Ethernet or wireless communications with the network 110. The processor 105B, while executing the application, may generate data and/or receive user inputs from the display/UI 105A and/or receive/transmit messages to the server system 115, and may further perform one or more operations prior to providing an output to the network 110.--, in [0035]-[0036]; and, --a user may upload the training dataset to a user device (e.g., user device 105) to manually annotate each article of training data. The user device 105 may or may not store the training dataset in the memory (e.g., 105C). Once annotated, the user device 105 may transmit the annotated training dataset to the server system 115 via a network 101.
[0056] At step 210, the method may include, for each training dataset associated with an ERCP procedure, extracting anatomical feature data from the annotated training data. The extracted anatomical feature data may be used to train the machine-learning model to correctly identify and differentiate, during a live procedure, important anatomical objects relevant to the ERCP procedure. Additional disclosure relating to how the machine-learning model is trained off of the extracted anatomical feature data is further provided below in the discussion of FIG. 3.--, in [0055]-[0056]); and
determining a sentiment score for the output of the trained machine learning model based on the comments (see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claim 1).
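For illustration of the claim-7 step of reducing free-text comments to a sentiment score (neither reference discloses code for this; the lexicon and scoring rule below are hypothetical), a minimal Python sketch:

```python
# Illustrative sketch only: one way to reduce free-text user comments to a
# sentiment score for a model output, as recited in claim 7.
POSITIVE = {"accurate", "helpful", "correct", "clear"}
NEGATIVE = {"wrong", "missed", "unclear", "inaccurate"}

def sentiment_score(comments: list) -> float:
    """Return a score in [-1, 1]; positive values indicate favorable feedback."""
    signed, counted = 0, 0
    for comment in comments:
        for word in comment.lower().split():
            token = word.strip(".,!?")
            if token in POSITIVE:
                signed, counted = signed + 1, counted + 1
            elif token in NEGATIVE:
                signed, counted = signed - 1, counted + 1
    return signed / counted if counted else 0.0

print(sentiment_score(["Annotation was accurate and helpful.",
                       "Trajectory overlay was unclear."]))  # ~0.33
```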
Re Claim 9, claim 9 is the system claim corresponding to claim 1. Claim 9 is therefore rejected for reasons similar to those given for claim 1; see the discussion of claim 1 above. DUVAL as modified by WUBBELS further discloses a system comprising: a first device located at a deployment site, wherein the first device comprises: a user input device; a first non-transitory memory including a trained machine learning model, and instructions; and a first processor, wherein, when executing the instructions, the first processor causes the first device to perform the method (see DUVAL: e.g., [0008], [0039], and [0048]-[0049], as quoted above with regard to claim 1); and
transmit the input data characterization, the model performance characterization, and the user feedback characterization, to a second device, the second device located remotely from the first device, wherein the first device and the second device are communicatively coupled, and wherein the second device comprises: a second non-transitory memory including instructions; and a second processor, wherein, when executing the instructions, the second processor causes the second device to perform the method (see DUVAL: e.g., [0053], [0073], and [0076], as quoted above with regard to claim 1).
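For illustration of the claim-9 division of labor between the deployment-site device and the remote device (the payload field names below are assumptions, not claim or reference text), a minimal Python sketch:

```python
# Illustrative sketch only: the first (deployment-site) device packages the
# three characterizations into a payload for the remote second device.
import json

def first_device_send(input_char: dict, perf_char: dict, feedback_char: dict) -> str:
    # Serialize only derived characterizations for transmission off-site.
    return json.dumps({
        "input_data_characterization": input_char,
        "model_performance_characterization": perf_char,
        "user_feedback_characterization": feedback_char,
    })

def second_device_receive(payload: str) -> dict:
    # The remote device sees derived statistics, not raw inputs.
    return json.loads(payload)

payload = first_device_send({"mean_intensity": 0.41},
                            {"confidence": 0.93, "latency_ms": 18},
                            {"sentiment": 0.33})
print(second_device_receive(payload)["model_performance_characterization"])
```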
Re Claim 10, DUVAL as modified by WUBBELS further discloses wherein the plurality of previously determined input data characterizations, the plurality of previously determined model performance characterizations, and the plurality of previously determined user feedback characterizations, are derived from a training dataset used to train the trained machine learning model (see WUBBELS: e.g., --generate an inference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {reads on “user feedback”; herein, “feature” and “diagnosis a disease” align with “extracting characteristics of user feedback”}; also see DUVAL: e.g., [0007], as quoted above with regard to claim 7, and [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claim 1).
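For illustration of the claim-10 reading, under which the reference characterizations are derived from the training dataset itself (the per-field mean/standard-deviation baseline and the sample values below are invented), a minimal Python sketch:

```python
# Illustrative sketch only: deriving the "previously determined"
# characterizations of claim 10 from the training dataset.
import statistics

training_characterizations = [
    {"mean_intensity": 0.38, "confidence": 0.91},
    {"mean_intensity": 0.44, "confidence": 0.95},
    {"mean_intensity": 0.40, "confidence": 0.89},
]

def baseline(chars: list) -> dict:
    # One (mean, stdev) pair per characterization field.
    return {key: (statistics.mean(c[key] for c in chars),
                  statistics.stdev(c[key] for c in chars))
            for key in chars[0]}

print(baseline(training_characterizations))
```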
Re Claim 11, DUVAL as modified by WUBBELS further discloses wherein the plurality of previously determined input data characterizations, the plurality of previously determined model performance characterizations, and the plurality of previously determined user feedback characterizations, are derived from previous inferences of the trained machine learning model at the deployment site (see WUBBELS: col. 14, lines 48-55, as quoted above with regard to claim 10 {reads on “user feedback”}; also see DUVAL: e.g., [0007]-[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claims 1 and 7).
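For illustration of the claim-11 alternative, where the reference population comes from prior inferences at the deployment site rather than from the training set (the 100-inference rolling window below is an arbitrary illustrative choice), a minimal Python sketch:

```python
# Illustrative sketch only: a rolling baseline built from prior inferences
# at the deployment site, per claim 11.
from collections import deque

class RollingBaseline:
    def __init__(self, window: int = 100):
        self.history = deque(maxlen=window)  # oldest entries age out

    def record(self, characterization: dict) -> None:
        self.history.append(characterization)

    def mean(self, key: str) -> float:
        values = [c[key] for c in self.history]
        return sum(values) / len(values)

rb = RollingBaseline()
for confidence in (0.90, 0.92, 0.88):
    rb.record({"confidence": confidence})
print(rb.mean("confidence"))  # 0.90
```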
Re Claim 12, DUVAL as modified by WUBBELS further discloses wherein the input data characterization, the model performance characterization, and the user feedback characterization each do not include the medical image, and wherein, when executing the instructions, the first processor does not transmit the medical image to the second device (see WUBBELS: col. 14, lines 48-55, as quoted above with regard to claim 10; also see DUVAL: e.g., [0007]-[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claims 1 and 7).
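For illustration of the claim-12 constraint that the medical image never leaves the first device (the "medical_image" field name below is an assumption), a minimal Python sketch:

```python
# Illustrative sketch only: enforcing the claim-12 constraint that the
# medical image itself is never part of what the first processor transmits.
EXCLUDED_FIELDS = {"medical_image"}

def scrub(characterization: dict) -> dict:
    # Drop raw-image fields before the payload leaves the deployment site.
    return {k: v for k, v in characterization.items() if k not in EXCLUDED_FIELDS}

record = {"medical_image": b"\x89PNG...", "mean_intensity": 0.41}
print(scrub(record))  # {'mean_intensity': 0.41}; the image never leaves
```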
Re Claim 13, DUVAL as modified by WUBBELS further discloses wherein the alert includes an indication of one or more of the input data deviation exceeding the input data deviation threshold, the model performance deviation exceeding the model performance deviation threshold, and the user feedback deviation exceeding the user feedback deviation threshold (see WUBBELS: col. 14, lines 48-55, as quoted above with regard to claim 10; also see DUVAL: e.g., [0007]-[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claims 1 and 7).
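For illustration of a claim-13 style alert that names each deviation exceeding its threshold (the threshold values below are invented), a minimal Python sketch:

```python
# Illustrative sketch only: an alert indicating which of the three
# deviations exceeded its threshold, per claim 13.
THRESHOLDS = {"input_data": 0.10, "model_performance": 0.05, "user_feedback": 0.20}

def build_alert(deviations: dict):
    exceeded = [name for name, value in deviations.items()
                if value > THRESHOLDS[name]]
    if not exceeded:
        return None  # nothing to report
    return "ALERT: deviation threshold exceeded for " + ", ".join(exceeded)

print(build_alert({"input_data": 0.02,
                   "model_performance": 0.09,
                   "user_feedback": 0.31}))
# ALERT: deviation threshold exceeded for model_performance, user_feedback
```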
Re Claim 14, claim 14 is the method claim corresponding to claim 1. Claim 14 is therefore rejected for reasons similar to those given for claim 1; see the discussion of claim 1 above. DUVAL as modified by WUBBELS further discloses a method for monitoring performance of a trained machine learning model (see DUVAL: e.g., [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claim 1).
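For illustration of the confidence-weight gate that DUVAL [0007] recites, under which guidance is generated only when the model's confidence weight exceeds a predetermined threshold (the 0.8 value below is invented), a minimal Python sketch:

```python
# Illustrative sketch only: guidance is generated only when the confidence
# weight exceeds a predetermined threshold, per DUVAL [0007].
CONFIDENCE_THRESHOLD = 0.8

def maybe_generate_guidance(detections: list) -> list:
    guidance = []
    for label, confidence_weight in detections:
        if confidence_weight > CONFIDENCE_THRESHOLD:
            guidance.append("overlay trajectory toward " + label)
        # below threshold: suppress the overlay rather than risk bad guidance
    return guidance

print(maybe_generate_guidance([("papilla", 0.93), ("biliary duct", 0.62)]))
# ['overlay trajectory toward papilla']
```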
Re Claim 15, DUVAL as modified by WUBBELS further discloses extracting characteristics of user feedback received based on the output of the trained machine learning model, to produce a user feedback characterization (see WUBBELS: col. 14, lines 48-55, as quoted above with regard to claim 10 {reads on “user feedback”}; also see DUVAL: e.g., [0007]-[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claims 1 and 7); and
determining a user feedback deviation by comparing the user feedback characterization against a plurality of previously determined user feedback characterizations (see WUBBELS: col. 14, lines 48-55, as quoted above with regard to claim 10; also see DUVAL: e.g., [0007]-[0008], [0039], [0048]-[0049], [0053], [0073], and [0076], as quoted above with regard to claims 1 and 7); and
responding to the user feedback deviation exceeding a user feedback deviation threshold by transmitting an alert to the user device (see WUBBELS: e.g., --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {read on “user feedback”, and herein “feature” “diagnosis a disease” align with “extracting characteristics of user feedback”; also see DUVAL: e.g., --detecting a location of one or more third anatomical objects from within the second target region; and detecting one or more other anatomical objects associated with the first anatomical object. The determination of the navigational guidance for the medical device may include: identifying a confidence weight held by the predictive navigational guidance model for the at least one anatomical object; and determining whether that confidence weight is greater than a predetermined confidence threshold; wherein the generation of the navigational guidance is only performed in response to determining that the confidence weight is greater than the predetermined confidence threshold. The at least one visual representation may include one or more of: at least one trajectory overlay, at least one annotation, and/or at least one feedback notification. The at least one trajectory overlay may include a visual indication, overlaid on top of an image of the at least one anatomical object, of a projected path to an access point of the at least one anatomical object that a component of the medical device may follow to cannulate the at least one anatomical object. The computer-implemented method may also receive position data for the medical device and identify deviation of the medical device from the projected path based on analysis of the position data. The generation of the feedback notification in this situation may be responsive to the detection that the deviation of the medical device from the projected path is greater than a predetermined amount. The at least one annotation may include one or more visual indications, overlaid on top of an image of the at least one anatomical object, indicating predetermined features associated with the at least one anatomical object. The one or more visual indications may include one or more of: a color indication, an outline indication, and/or a text-based indication.--, in [0007]; and,
DUVAL: e.g., [0008]; [0039]; [0048]-[0049]; and [0053], [0073], and [0076], all as quoted and addressed above in the discussions for claim 1).
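For illustration, the alerting step recited above reduces to a threshold comparison on a computed deviation; a minimal Python sketch of that logic follows. Every identifier in it (ALERT_THRESHOLD, send_alert, respond_to_feedback_deviation) is hypothetical and is not drawn from the claims or the cited references.

```python
# Minimal sketch of the threshold-based alerting step recited above.
# All identifiers are hypothetical; neither DUVAL nor WUBBELS discloses
# this exact interface.

ALERT_THRESHOLD = 0.25  # assumed user feedback deviation threshold


def send_alert(device_id: str, deviation: float) -> None:
    """Stand-in for transmitting an alert to the user device."""
    print(f"ALERT to {device_id}: feedback deviation {deviation:.3f} "
          f"exceeds threshold {ALERT_THRESHOLD}")


def respond_to_feedback_deviation(device_id: str, deviation: float) -> None:
    # Transmit an alert only when the deviation exceeds the threshold.
    if deviation > ALERT_THRESHOLD:
        send_alert(device_id, deviation)


respond_to_feedback_deviation("workstation-01", 0.31)  # triggers the alert
```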
Re Claim 16, DUVAL as modified by WUBBELS further discloses wherein the user feedback comprises a scan plane used to acquire a diagnostic medical image of the imaging subject (see WUBBELS: e.g., lines 48-55, col. 14 {read on "user feedback"}; also see DUVAL: e.g., [0007], [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], all as quoted and addressed above).
Re Claim 17, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of user feedback received based on the output of the trained machine learning model, to produce the user feedback characterization, comprises: determining center coordinates of the scan plane; and determining three direction cosines uniquely identifying an orientation of the scan plane (see WUBBELS: e.g., lines 48-55, col. 14 {read on "user feedback"}; also see DUVAL: e.g., [0007], [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], all as quoted and addressed above).
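For illustration, the recited scan plane parameterization (center coordinates plus three direction cosines uniquely identifying the plane's orientation) can be computed from the plane geometry as in the minimal sketch below; the corner-point input format is an assumption for illustration and is not disclosed by DUVAL or WUBBELS.

```python
import numpy as np


def characterize_scan_plane(corners: np.ndarray):
    """Given four corner points of a rectangular scan plane (shape (4, 3),
    patient coordinates in mm), return its center coordinates and the
    three direction cosines of its normal, which uniquely identify the
    plane's orientation."""
    center = corners.mean(axis=0)              # center coordinates
    e1 = corners[1] - corners[0]               # in-plane edge vectors
    e2 = corners[3] - corners[0]
    normal = np.cross(e1, e2)
    cosines = normal / np.linalg.norm(normal)  # three direction cosines
    return center, cosines


corners = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [10.0, 10.0, 5.0],
                    [0.0, 10.0, 5.0]])
center, cosines = characterize_scan_plane(corners)
print(center, cosines)  # cosines satisfy cos_x**2 + cos_y**2 + cos_z**2 == 1
```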
Re Claim 18, DUVAL as modified by WUBBELS further discloses wherein the medical image comprises a three-dimensional (3D) medical image, and wherein the trained machine learning model is configured to identify anatomical landmarks in the 3D medical image for positioning of a scan plane for acquisition of a diagnostic medical image of a region of interest (see WUBBELS: e.g., lines 48-55, col. 14; also see DUVAL: e.g., [0007], [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], all as quoted and addressed above).
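For illustration, one way a scan plane may be positioned from identified anatomical landmarks is to fit a plane through three landmark points, as in the sketch below; the landmark names and the three-point construction are illustrative assumptions, not the disclosure of either reference.

```python
import numpy as np

# Hypothetical landmark coordinates (mm) that a landmark-identification
# model might return for a 3D localizer image; names are illustrative.
landmarks = {
    "anterior_commissure":  np.array([0.0, 30.0, 12.0]),
    "posterior_commissure": np.array([0.0, 5.0, 10.0]),
    "mid_sagittal_ref":     np.array([2.0, 18.0, 40.0]),
}


def plane_from_landmarks(p1, p2, p3):
    """Return (center, unit normal) of the plane through three landmarks,
    usable as a candidate scan plane for the diagnostic acquisition."""
    center = (p1 + p2 + p3) / 3.0
    normal = np.cross(p2 - p1, p3 - p1)
    return center, normal / np.linalg.norm(normal)


center, normal = plane_from_landmarks(*landmarks.values())
print("scan plane center:", center, "direction cosines:", normal)
```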
Re Claim 19, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of the medical image of the imaging subject to produce the input data characterization comprises: capturing metadata of the 3D medical image, including pixel spacing and slice thickness of the 3D medical image; determining aggregate intensity statistics for the 3D medical image, including an intensity histogram; and determining a clinical ontology for the 3D medical image comprising a list of anatomical regions captured in the 3D medical image (see WUBBELS: e.g., lines 48-55, col. 14; also see DUVAL: e.g., [0007], [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], all as quoted and addressed above).
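For illustration, the recited input data characterization (acquisition metadata, aggregate intensity statistics with a histogram, and a clinical-ontology region list) may be assembled as in the minimal sketch below; the dictionary layout, field names, and fixed region list are assumptions for illustration only.

```python
import numpy as np


def characterize_input(volume: np.ndarray,
                       pixel_spacing=(1.0, 1.0),
                       slice_thickness=2.0,
                       regions=("liver", "pancreas", "biliary_duct")):
    """Produce an input data characterization for a 3D medical image:
    acquisition metadata, aggregate intensity statistics (including a
    histogram), and a clinical-ontology list of anatomical regions."""
    hist, bin_edges = np.histogram(volume, bins=32)
    return {
        "metadata": {"pixel_spacing_mm": pixel_spacing,
                     "slice_thickness_mm": slice_thickness,
                     "shape": volume.shape},
        "intensity": {"mean": float(volume.mean()),
                      "std": float(volume.std()),
                      "histogram": hist.tolist(),
                      "bin_edges": bin_edges.tolist()},
        "clinical_ontology": list(regions),
    }


vol = np.random.default_rng(0).normal(100.0, 20.0, size=(16, 64, 64))
c = characterize_input(vol)
print(c["metadata"], c["intensity"]["mean"])
```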
Re Claim 20, DUVAL as modified by WUBBELS further discloses wherein extracting characteristics of performance of the trained machine learning model during mapping of the medical image to the output, to produce the model performance characterization, comprises: extracting properties of one or more segmentation masks produced by the trained machine learning model identifying the anatomical landmarks in the 3D medical image; extracting properties of the scan plane determined based on the anatomical landmarks; and recording a list of the anatomical landmarks identified in the 3D medical image by the trained machine learning model (see WUBBELS: e.g., lines 48-55, col. 14; also see DUVAL: e.g., [0007], [0008], [0039], [0048]-[0049], [0053], [0073], and [0076], all as quoted and addressed above).
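For illustration, the recited model performance characterization may collect simple properties (voxel count, centroid) of each segmentation mask together with the scan plane properties and the landmark list, as in the sketch below; the mask format and the chosen property set are illustrative assumptions.

```python
import numpy as np


def characterize_performance(masks: dict, scan_plane: dict):
    """Summarize model performance artifacts: per-landmark segmentation
    mask properties, properties of the derived scan plane, and the list
    of identified landmarks."""
    mask_props = {}
    for name, mask in masks.items():  # mask: boolean 3D array
        voxels = int(mask.sum())
        centroid = (np.argwhere(mask).mean(axis=0).tolist()
                    if voxels else None)
        mask_props[name] = {"voxel_count": voxels, "centroid": centroid}
    return {
        "segmentation_masks": mask_props,
        "scan_plane": scan_plane,  # e.g., center plus direction cosines
        "landmarks_identified": sorted(masks.keys()),
    }


masks = {"papilla": np.zeros((8, 8, 8), dtype=bool)}
masks["papilla"][3:5, 3:5, 3:5] = True
plane = {"center": [4.0, 4.0, 4.0], "cosines": [0.0, 0.0, 1.0]}
print(characterize_performance(masks, plane))
```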
Claims 3, 5, 8 are rejected under 35 U.S.C. 103 as being unpatentable over DUVAL as modified by WUBBELS, and further in view of Makrinich (US 20210313052 A1).
Re Claim 3, DUVAL as modified by WUBBELS, however, does not explicitly disclose
encoding the metadata, the pixel or voxel statistics, the one or more tags from the appearance ontology, and the one or more tags from the clinical ontology, as a feature vector;
Makrinich discloses encoding the metadata, the pixel or voxel statistics, the one or more tags from the appearance ontology, and the one or more tags from the clinical ontology, as a feature vector (see Makrinich: e.g., -- [0057] In some embodiments, analyzing image data (as described herein) may include analyzing the image data to obtain a preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. Some non-limiting examples of such image data may include one or more images, videos, frames, footages, 2D image data, 3D image data, and so forth. One of ordinary skill in the art will recognize that the followings are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain a transformed image data, and the preprocessed image data may include the transformed image data. For example, the transformed image data may include one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may include a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may include: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may include information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to: edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth.--, in [0057]-[0058]; and, -- [0196] In various embodiments, how well the measure of the deviation coincides with the desired measure of the deviation may be asserted using any suitable, appropriate mathematical measure function G. For example, if a measure of a deviation for an event is a number (e.g., d), and the desired measure of the deviation is another number (e.g., d_0), then an example mathematical measure function for a given event E_i may be G_i(d, d_0) = d − d_0, and the measure function may be, for example, a number G = Σ_i G_i(d_i, d_i0)^2. Alternatively, in another example embodiment, G may be a vector G = {G_i(d_i, d_i0)}.--, in [0196]-[0199]; and,
-- the data structure may be a file, and the at least one surgical video clip may be stored in the file. In one example, the data structure may be a database, and the at least one surgical video clip may be stored in the database. In one example, the data structure may be configured to store a sequence of values, and the at least one surgical video clip may be encoded in the sequence of values.--, in [0218], -- The presence or absence of certain events or complications may also affect a subject's competency assessment and corresponding scores. For example, significant bleeding may indicate a relatively lower competency level of the subject, and accordingly, a lower competency-related score. Surgical flow may be assessed using artificial intelligence trained on image data. For example, a machine learning model may be trained using training examples to assess surgical procedure flow from surgical footage, and the trained machine learning model may be used to analyze the plurality of video frames and generate the assessment of the subject's surgical procedure flow. An example of such training example may include surgical footage from a particular prior surgical procedure, together with a label indicating a desired assessment for the surgical procedure flow in the particular prior surgical procedure.--, [0260]; and, -- an operating room may have distinguishing characteristics such as equipment placement (e.g., operating tables, furniture, lights, devices, or other equipment), room layout (e.g., based on the placement of windows, walls or doors; room dimensions, room shape; ceiling contours, or other layout properties), color (e.g., paint color, equipment color, etc.), lighting properties, individuals within the room (e.g., physicians, nurses, technicians, etc.), equipment types or combinations of equipment, types of medical procedures being performed, artwork, patterns, or other visual characteristics that may distinguish a space from other spaces in a medical facility. In some embodiments, the space may include a tag or other distinguishing feature unique to the space. For example, determining the location information may include analyzing one or more images to detect a room number, room name, a scannable code (e.g., a barcode, a quick response (QR) code, an encoded image, a proprietary code, or similar formats), or other visual tags that may be used to identify a room. For example, a piece of medical equipment may include a receiver, and a room may contain a passive or active tag from which the piece of medical equipment can determine its own location.--, in [0309]);
DUVAL (as modified by WUBBELS) and Makrinich are combinable because they are in the same field of endeavor: monitoring and validating the performance of a machine learning model for medical images and data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify DUVAL (as modified by WUBBELS)'s method using Makrinich's teachings, by including encoding the metadata, the pixel or voxel statistics, the one or more tags from the appearance ontology, and the one or more tags from the clinical ontology, as a feature vector, in DUVAL (as modified by WUBBELS)'s training of a machine learning model, in order to extract and encode image features from the image data and metadata (see Makrinich: e.g., in [0057]-[0058], and [0196]-[0199]);
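For illustration, the encoding step may be understood as concatenating numeric metadata and statistics with one-hot encodings of the ontology tags into a single feature vector, as in the minimal sketch below; the tag vocabularies and field choices are assumptions for illustration and are not Makrinich's implementation.

```python
import numpy as np

APPEARANCE_TAGS = ["high_contrast", "noisy", "motion_artifact"]  # assumed
CLINICAL_TAGS = ["liver", "pancreas", "biliary_duct"]            # assumed


def one_hot(tags, vocabulary):
    """One-hot encode a set of ontology tags against a fixed vocabulary."""
    return np.array([1.0 if t in tags else 0.0 for t in vocabulary])


def encode_characterization(metadata, voxel_stats, appearance, clinical):
    """Encode metadata, pixel/voxel statistics, and the ontology tags as
    one fixed-length feature vector."""
    return np.concatenate([
        np.asarray(metadata, dtype=float),     # e.g., spacing, thickness
        np.asarray(voxel_stats, dtype=float),  # e.g., mean, std
        one_hot(appearance, APPEARANCE_TAGS),
        one_hot(clinical, CLINICAL_TAGS),
    ])


vec = encode_characterization([1.0, 1.0, 2.0], [101.3, 19.8],
                              {"noisy"}, {"pancreas", "biliary_duct"})
print(vec)  # one flat feature vector, ready for comparison
```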
DUVAL as modified by WUBBELS and Makrinich further discloses comparing the feature vector against a plurality of pre-determined feature vectors corresponding to the plurality of previously determined input data characterizations (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and a performance metric is decreased based on the determination.--, in abstract, and,
-- to assess quality of input provided to the machine learning model, [...] to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an inference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {read on "user feedback", and herein "feature" "diagnosis a disease" align with "extracting characteristics of user feedback"}; and, --When the multiple inferences are not substantially the same as the inference of the machine learning model 1100, […] the validator module 1100 can note a decrease in the accuracy of the machine learning model--, in lines 56-60, col. 14 {herein "decrease in the accuracy" is the result of comparison to previous characterizations, and is consistent with DUVAL's confidence weights/scores as cited and discussed below}; and, --comparing distributions of inference results and other input dimensions (e.g., ethnicity, camera type, technician skill level, etc.) over time with incoming inference results and new input data for a period of time--, in lines 31-34, col. 17;
also see DUVAL: e.g., [0008], [0039], and [0048]-[0049], all as quoted and addressed above in the discussions for claim 1).
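For illustration, the comparing step can be read together with Makrinich's measure function G (a sum of squared deviations, per [0196]): a new feature vector may be scored against each pre-determined vector and flagged when the smallest deviation exceeds a threshold, as in the sketch below; the threshold value is an assumption for illustration only.

```python
import numpy as np

DEVIATION_THRESHOLD = 4.0  # assumed, for illustration only


def squared_deviation(v: np.ndarray, v0: np.ndarray) -> float:
    """Makrinich-style measure: G = sum_i (d_i - d_i0)**2, with the
    per-component deviations taken between feature vectors."""
    return float(np.sum((v - v0) ** 2))


def compare_to_references(vec, reference_vectors):
    """Return the smallest deviation from any previously determined
    characterization, and whether it falls outside the threshold."""
    best = min(squared_deviation(vec, ref) for ref in reference_vectors)
    return best, best > DEVIATION_THRESHOLD


refs = [np.array([1.0, 0.0, 2.0]), np.array([1.2, 0.1, 2.1])]
dev, out_of_range = compare_to_references(np.array([3.0, 1.0, 0.0]), refs)
print(dev, out_of_range)  # ~8.46, True -> would be flagged as anomalous
```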
Re Claim 5, DUVAL as modified by WUBBELS and Makrinich further discloses encoding the output of the trained machine learning model, along with the one or more intermediate outputs, the confidence score, and the one or more uncertainty metrics, as a feature vector (see Makrinich: e.g., in [0057]-[0058], [0196]-[0199], [0218], [0260], and [0309], all as quoted and addressed above in the discussions for claim 3); and
comparing the feature vector against a plurality of pre-determined feature vectors corresponding to the plurality of previously determined model performance characterizations (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and performance metric is decreased based on determining the determination.--, in abstract, and,
-- in assess quality of input provided to the machine learning model, ..to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an interference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 can be [….] a professional trained to identify the feature, such as a healthcare professional trained to diagnosis a disease--, in lines 48-55, col. 14 {read on “user feedback”, and herein “feature” “diagnosis a disease” align with “extracting characteristics of user feedback”}; and, --When the multiple interferences are not substantially the same as the interference of the machine learning model 1100, […] the validator module 1100 can note a decrease in the accuracy of the machine learning model--, {herein “decrease in the accuracy” is the result of comparison to previous characterizations; and is consistent with DUVAL’s confidence weights/scores as will be cited and discussed below}--, in lines 56-60, col. 14; and, --comparing distributions of interference results and other input dimensions (e.g., ethnicity, camera type, technician skill level. Etc.,) over time with incoming interference results and new input data for a period time--, in lines 31-34, col. 17;
also see DUVAL: e. g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]). See the similar obviousness and motivation statements for the combination of cited references as addressed above in the discussions for claim 3.
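The claimed comparison step may be pictured concretely. The following is a minimal illustrative sketch, not drawn from the cited references: the three-component encoding, the Euclidean distance, and the tolerance value are all hypothetical choices, offered only to show how a model performance characterization, once encoded as a feature vector, might be compared against a plurality of pre-determined feature vectors to detect a deviation from expected performance of the kind WUBBELS describes.

import numpy as np

def performance_deviation(feature_vec, stored_vecs, tolerance=0.25):
    # Distance from the current characterization to each stored characterization.
    distances = np.linalg.norm(stored_vecs - feature_vec, axis=1)
    nearest = float(distances.min())
    # Flag a deviation when even the nearest prior characterization is far away.
    return nearest > tolerance, nearest

stored = np.array([[0.92, 0.88, 0.10],   # hypothetical prior characterizations,
                   [0.90, 0.85, 0.12],   # e.g., (accuracy, mean confidence,
                   [0.93, 0.90, 0.09]])  # error rate) from validation runs
current = np.array([0.71, 0.60, 0.30])   # characterization of current performance
deviated, dist = performance_deviation(current, stored)
print(deviated, round(dist, 3))          # True 0.362: outside the expected range

Under this reading, a distance above the tolerance plays the role of WUBBELS's deviation from expected performance as defined by pre-deployment validation.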
Re Claim 8, DUVAL as modified by WUBBELS and Makrinich further discloses wherein determining the user feedback deviation by comparing the user feedback characterization against the plurality of previously determined user feedback characterizations comprises: encoding the model output rating and the user correction as a feature vector (see Makrinich: e.g., -- [0057] In some embodiments, analyzing image data (as described herein) may include analyzing the image data to obtain a preprocessed image data, and subsequently analyzing the image data and/or the preprocessed image data to obtain the desired outcome. Some non-limiting examples of such image data may include one or more images, videos, frames, footages, 2D image data, 3D image data, and so forth. One of ordinary skill in the art will recognize that the followings are examples, and that the image data may be preprocessed using other kinds of preprocessing methods. In some examples, the image data may be preprocessed by transforming the image data using a transformation function to obtain a transformed image data, and the preprocessed image data may include the transformed image data. For example, the transformed image data may include one or more convolutions of the image data. For example, the transformation function may comprise one or more image filters, such as low-pass filters, high-pass filters, band-pass filters, all-pass filters, and so forth. In some examples, the transformation function may include a nonlinear function. In some examples, the image data may be preprocessed by smoothing at least parts of the image data, for example using Gaussian convolution, using a median filter, and so forth. In some examples, the image data may be preprocessed to obtain a different representation of the image data. For example, the preprocessed image data may include: a representation of at least part of the image data in a frequency domain; a Discrete Fourier Transform of at least part of the image data; a Discrete Wavelet Transform of at least part of the image data; a time/frequency representation of at least part of the image data; a representation of at least part of the image data in a lower dimension; a lossy representation of at least part of the image data; a lossless representation of at least part of the image data; a time ordered series of any of the above; any combination of the above; and so forth. In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may include information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to: edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth.--, in [0057]-[0058]; and, -- [0196] In various embodiments, how well the measure of the deviation coincides with the desired measure of the deviation may be asserted using any suitable, appropriate mathematical measure function G. For example, if a measure of a deviation for an event is a number (e.g., d), and the desired measure of the deviation is another number (e.g., d_0), then an example mathematical measure function for a given event E_i may be G_i(d, d_0) = d − d_0, and the measure function may be, for example, a number G = Σ_i G_i(d_i, d_{i,0})².
Alternatively, in another example embodiment, G may be a vector G = {G_i(d_i, d_{i,0})}.--, in [0196]-[0199] {illustrative sketches of the recited image preprocessing and of this deviation measure follow the citations for this claim}; and,
-- the data structure may be a file, and the at least one surgical video clip may be stored in the file. In one example, the data structure may be a database, and the at least one surgical video clip may be stored in the database. In one example, the data structure may be configured to store a sequence of values, and the at least one surgical video clip may be encoded in the sequence of values.--, in [0218], -- The presence or absence of certain events or complications may also affect a subject's competency assessment and corresponding scores. For example, significant bleeding may indicate a relatively lower competency level of the subject, and accordingly, a lower competency-related score. Surgical flow may be assessed using artificial intelligence trained on image data. For example, a machine learning model may be trained using training examples to assess surgical procedure flow from surgical footage, and the trained machine learning model may be used to analyze the plurality of video frames and generate the assessment of the subject's surgical procedure flow. An example of such training example may include surgical footage from a particular prior surgical procedure, together with a label indicating a desired assessment for the surgical procedure flow in the particular prior surgical procedure.--, in [0260]; and, -- an operating room may have distinguishing characteristics such as equipment placement (e.g., operating tables, furniture, lights, devices, or other equipment), room layout (e.g., based on the placement of windows, walls or doors; room dimensions, room shape; ceiling contours, or other layout properties), color (e.g., paint color, equipment color, etc.), lighting properties, individuals within the room (e.g., physicians, nurses, technicians, etc.), equipment types or combinations of equipment, types of medical procedures being performed, artwork, patterns, or other visual characteristics that may distinguish a space from other spaces in a medical facility. In some embodiments, the space may include a tag or other distinguishing feature unique to the space. For example, determining the location information may include analyzing one or more images to detect a room number, room name, a scannable code (e.g., a barcode, a quick response (QR) code, an encoded image, a proprietary code, or similar formats), or other visual tags that may be used to identify a room. For example, a piece of medical equipment may include a receiver, and a room may contain a passive or active tag from which the piece of medical equipment can determine its own location.--, in [0309]); and
comparing the feature vector against a plurality of pre-determined feature vectors corresponding to the plurality of previously determined user feedback characterizations (see WUBBELS: e.g., --The method involves monitoring medical diagnostic inferences by a target machine learning model over a period of time, and determining a number of inferences that are indicative of a particular medical condition based on the monitoring. A performance anomaly associated with the model is detected in response to determining that the inferences are outside a specified range. The model is caused to be retrained or to be decommissioned based on detection of the anomaly. A determination is made that the anomaly is caused by the model, and a performance metric is decreased based on the determination.--, in abstract, and,
--to assess quality of input provided to the machine learning model, ... to detect a deviation from expected performance as defined by pre-deployment validation--, in lines 54-59, col. 2; and, --generate an inference by using the machine learning model 1100 on the input, and can request from multiple reference members 1140, 1150 […] a professional trained to identify the feature, such as a healthcare professional trained to diagnose a disease--, in lines 48-55, col. 14 {reads on “user feedback”, and herein “feature” and “diagnose a disease” align with “extracting characteristics of user feedback”}; and, --When the multiple inferences are not substantially the same as the inference of the machine learning model 1100, […] the validator module 1100 can note a decrease in the accuracy of the machine learning model--, in lines 56-60, col. 14 {herein “decrease in the accuracy” is the result of comparison to previous characterizations, and is consistent with DUVAL’s confidence weights/scores as will be cited and discussed below; see also the comparison sketch following the citations for this claim}; and, --comparing distributions of inference results and other input dimensions (e.g., ethnicity, camera type, technician skill level, etc.) over time with incoming inference results and new input data for a period of time--, in lines 31-34, col. 17;
also see DUVAL: e.g., --, a computer-implemented method of training a predictive navigational guidance model is provided. The computer-implemented method, including: receiving, from a database, a training dataset comprising historical medical procedure data associated with a plurality of completed medical procedures; extracting, from image data in the training dataset, anatomical feature data; extracting, from sensor data in the training dataset, medical device positioning data; extracting, from the training dataset, procedure outcome data; and utilizing the extracted anatomical feature data, the extracted medical device positioning data, and the extracted procedure outcome data to train the predictive navigational guidance model.--, in [0008]; --The server system 115 may include and/or act as a repository or source for extracted raw dataset information.--, in [0039]; and, -- a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model, e.g., compare the output of the trained model with the ground truth for that portion of the training data to evaluate an accuracy of the trained model. The training of the machine-learning model may be configured to cause the machine-learning model to learn contextual associations between the raw procedure data and the context with which it is associated with (e.g., which anatomical features and/or medical device actions affected the success rate of the ERCP procedure etc.), such that the trained machine-learning model is configured to provide predictive guidance that may increase the success rate of an ERCP procedure.
[0049] In various embodiments, the variables of a machine-learning model may be interrelated in any suitable arrangement in order to generate the output. For instance, in some embodiments, the machine-learning model may include signal processing architecture that is configured to identify, isolate, and/or extract features, patterns, and/or structure in an image or video.--, in [0048]-[0049]). See the similar obviousness and motivation statements for the combination of cited references as addressed above in the discussions for claim 3.
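Two of the preprocessing options Makrinich recites in [0057] can be illustrated briefly. The sketch below is not taken from the reference; the synthetic image is a hypothetical stand-in, and only standard numpy/scipy calls are used, one for Gaussian smoothing and one for a Discrete Fourier Transform representation.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((64, 64))                  # hypothetical stand-in for medical image data

smoothed = gaussian_filter(image, sigma=2.0)  # smoothing using Gaussian convolution
spectrum = np.fft.fft2(image)                 # Discrete Fourier Transform representation

print(smoothed.shape, spectrum.dtype)         # (64, 64) complex128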
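Likewise, the encoding-and-comparison recited in claim 8 can be sketched under stated assumptions: the two-component encoding of the model output rating and the user correction is hypothetical, and deviations are aggregated with a sum-of-squares measure in the spirit of Makrinich's G = Σ_i G_i(d_i, d_{i,0})².

import numpy as np

def encode_feedback(rating, correction_magnitude):
    # Hypothetical encoding: rating in [0, 1] and the size of the user's correction.
    return np.array([rating, correction_magnitude])

def feedback_deviation(vec, stored_vecs):
    # Sum of squared component deviations against each stored characterization;
    # the smallest value is the deviation relative to the closest previously
    # determined user feedback characterization.
    g_per_vector = ((stored_vecs - vec) ** 2).sum(axis=1)
    return float(g_per_vector.min())

stored = np.array([encode_feedback(0.90, 0.05),   # prior feedback: high ratings,
                   encode_feedback(0.85, 0.10)])  # small corrections
current = encode_feedback(0.40, 0.60)             # low rating, large correction
print(feedback_deviation(current, stored))        # a large G signals a feedback deviation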
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIWEN YANG, whose telephone number is (571) 270-5670. The examiner can normally be reached Monday-Friday, 8:30am-4:30pm ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEI WEN YANG/Primary Examiner, Art Unit 2662