DETAILED CORRESPONDENCE
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final Office action on the merits is in response to the communication received on 10 December 2025. The amendments to claims 1, 11, and 16 are acknowledged and have been carefully considered. Claims 2 and 12 are cancelled. Claims 1, 3-11, and 13-18 are pending and considered below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5-11, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Nye (US 2019/0156484) in view of Wang et al. (US 2020/0167930) and Bangia et al. (US 2021/0358121), and further in view of Benjamin et al. (US 2020/0265946).
Claims 1 and 16: Nye discloses a method and system for prioritizing a set of medical images to be evaluated using a machine learning model, wherein the machine learning model is a convolutional neural network ([92 “Radiologist worklists are prioritized by putting stat images first, followed by images in order from oldest to newest,” 44 “use neural networks and/or other machine learning to implement a new workflow for image and associated patient analysis including generating alerts based on radiological findings may be generated and delivered at the point of care of a radiology exam,” 101 “the learning network 1026, such as a deep learning network, other CNN, and/or other machine learning network, etc., receives the pre-processed image data at its input nodes and evaluates the image data according to the nodes,”]), comprising:
calculating a likelihood score for each of the medical condition outputs based upon determined statistical parameters that comprise an output mean prediction and standard deviation for the individual predictions of the medical condition output for each of the set of medical images ([103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding, etc. For example, a strength of correlation or connection in the learning network 1026 can translate into a percentage or numerical score indicating a probability of correct detection/diagnosis of the finding in the image data,” 108 “the more information provided to an AI algorithm model, the more accurate the prediction generated by the model. As described above, AI models can be used to deploy algorithms on an imaging device to provide bedside, real-time point of care notifications when a patient has a critical finding,” 120 “pre-processor 1024 can apply techniques such as down-sampling, anatomical segmentation, normalizing with mean and/or standard deviation of training population, contrast enhancement, etc., to scale or reduce image data size for further processing (e.g., by presenting the learning network 1026 with fewer samples representing the image data, etc.),”]); and
determining an order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs ([124 “image data and associated finding(s) can be provided via the output 1030 to be displayed, reported, logged, and/or otherwise used in a notification or alert 1135 to a healthcare practitioner such as a Tech, nurse, intensivist, trauma surgeon, and/or clinical system, etc., to act quickly on the critical and/or other clinical finding,” 135 “shown in the example of FIG. 13, the priority indication 1320 is high 1322. FIG. 14 illustrates the example GUI 1300 with a priority indication 1320 of medium 1324. FIG. 15 illustrates the example GUI 1300 with a priority indication 1310 of low,” 136 “alerts and notifications can escalate in proportion to an immediacy and severity of the detected condition,” 145-146, 149 “image data and associated finding(s) can be provided via the output 1030 to be displayed, reported, logged, and/or otherwise used in a notification or alert 1135 to a healthcare practitioner such as a Tech, nurse, intensivist, trauma surgeon, and/or clinical system, etc., to act quickly on the critical and/or other clinical finding,” 164]).
Nye does not explicitly disclose the following; however, Benjamin discloses:
identifying the set of medical images to be evaluated by the trained machine learning model, comprising identifying a plurality of medical images from a medical image repository, each of the plurality of medical images being unread by a medical professional ([65 “setting an order of priority of the cases the reader is allowed to choose at a given time, by: a) the computer system providing a deadline for reading each case; b) the computer system finding the time to deadline for each case, at the given time; c) the computer system providing an index of importance function of time to deadline and of characteristics of a case; d) the computer system calculating an index of importance for each case according to the index of importance function,” 68 “two different categories corresponding to a higher and a lower level of urgency, for a given time to deadline and other characteristics, the index of importance function is higher for the category corresponding to the higher level of urgency than for the category corresponding to the lower level of urgency,” 72 “assign cases to the workstations of readers who choose them; wherein the software is configured so that, for at least some of the cases, initially only a portion of the readers are allowed or encouraged to choose the case, but over time, as the case becomes more urgent to read if it is still not read, the case is escalated a first time by the computer system adding one or more other readers to the readers who are allowed or encouraged to choose the case, and over further time, if the case is still not read, the case is escalated at least one additional time by the computer system adding one or more other readers to the readers who are allowed or encouraged to choose the case,” 73 “computer system for automatically assigning each of a plurality of medical imaging cases for reading by one of a plurality of readers, the system comprising: a) a receiving module configured to receive medical imaging cases from one or more 
sites; b) a plurality of workstations, each used by one of the readers; c) a reader I/O module configured to display on a worklist on a workstation of each reader, information about a set of cases that the reader is allowed to choose for reading, the information indicating that the reader is encouraged to preferentially choose a displayed case over another displayed case, to receive information about the readers' choices of cases for reading, and to assign cases to the workstations of the readers,” 74, 77 “computer system displaying together with the worklist on the workstations of at least some of the readers how many unread cases match each of a plurality of different subspecialties, including subspecialties that are not indicated on the profile of that reader, or how many unread cases come from each of a plurality of different sites of origin, or both; d) the computer system receiving information on choices of cases by the readers,” 106 “a case with lower urgency for reading, only a portion of the readers, for example readers who are better qualified or more suitable than other readers to read the case, are allowed or encouraged to choose it for reading initially, but as the urgency for reading the case increases with time, if it is still not read, other readers are added to the portion of readers who are allowed or encouraged to choose it for reading. Allowing or encouraging additional readers to choose a case that has become more urgent is referred to herein as “escalating” the case,” 107-116, 117 “case may be escalated at one or more escalation times while it remains unread. Optionally, the escalation times occur at certain fractions of the total time available for reading a case, for example as specified by the SLA. Alternatively or additionally, the escalation times occur at certain absolute time intervals after the time the study was performed. 
When a case is escalated, for example because it has remained unread for too long, the criteria for exposing the case are changed to be more inclusive,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nye to identify the set of medical images to be evaluated by the trained machine learning model, comprising identifying a plurality of medical images from a medical image repository, each of the plurality of medical images being unread by a medical professional, as taught by Benjamin, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Nye does not explicitly disclose the following; however, Wang discloses:
training the machine learning model with a training data set of medical images using dropout in which the training links Bayesian inferences in deep Gaussian processes ([234 “deep network trained with dropout can be cast as a Bayesian approximation of a Gaussian process [64]. Given a set of training data and their labels {X,Y}, training a network F(⋅,W) with dropout has the effect of approximating the posterior distribution p(W|{X,Y}) by minimising the Kullback-Leibler divergence term, i.e. KL(q(W)∥p(W|{X,Y})); where q(W) is an approximating distribution over the weight matrices W with their elements randomly set to zero according to Bernoulli random variables,” 235 “extended network is trained with a dropout ratio of 0.5 applied to the newly inserted layer. At test time, the network is sampled N times using dropout. The final segmentation can be obtained by majority voting. The percentage of samples which disagree with the voting results is typically computed at each voxel as the uncertainty estimate,”]) to assess uncertainties in outputs of the machine learning model, wherein the outputs include a medical condition shown in the input training data set of medical images ([219 “the medical image computing domain, recent years have seen a growing number of applications using CNNs. Although there have been recent advances in tailoring CNNs to analyse volumetric images, most of the work to date studies image representations in 2D. While volumetric representations are more informative, the number of voxels scales cubically with the size of the region of interest,” 222 “uncertainty of the segmentation is also an important parameter for indicating the confidence and reliability of an algorithm [64, 75, 76]. The high uncertainty of a labelling can be a sign of an unreliable classification.
The feasibility of voxel-level uncertainty estimation is demonstrated herein using Monte Carlo samples of the proposed network with dropout at test time,” 234, 235, 250 “provides an uncertainty map generated with 100 Monte Carlo samples using dropout, while the bottom row represents an uncertainty map with a threshold at set at 0.1. It is clear from FIG. 41 that the uncertainties near the boundaries of different structures are relatively higher than the other regions. Note that displaying the segmentation uncertainty can be useful to a user, for example, in terms of the most effective locations for providing manual indications (e.g. clicks or scribbles) of the correct segmentation,”]).
running the trained machine learning model multiple times with dropout ([234-236]) on the set of medical images to be evaluated to produce individual predictions ([118 “regularized prediction for each pixel, which may be fed into a logistic loss function layer.” 119]) of a medical condition output for each of the set of medical images ([82 “image may be multimodal, in that it records multiple intensity values according to spatial location for multiple respective imaging modalities, such as for different modes of magnetic resonance (MR) image, and/or to combine images from different types of images, such as X-ray (e.g. computed tomography, CT), ultrasound, gamma ray (e.g. positron emission tomography, etc),” 94 “depending on the results from the proposed segmentation, there may be zero or multiple indications corresponding to a single segment—e.g. there might be two indications denoting regions that should belong to the first segment, but no indications with respect to the second segment,” 107-110, 218 “present application investigates efficient and flexible elements of modern convolutional networks such as dilated convolution and residual connection. With these building blocks, a high-resolution, compact convolutional network is proposed for volumetric image segmentation. To illustrate the efficiency of this approach for learning 3D representation from large-scale image data, the proposed network is validated with the challenging task of parcellating 155 neuroanatomical structures from brain MR images,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nye to train the machine learning model with a training data set of medical images using dropout, in which the training links Bayesian inferences in deep Gaussian processes to assess uncertainties in outputs of the machine learning model, wherein the outputs include a medical condition shown in the input training data set of medical images, and to run the trained machine learning model multiple times with dropout on the set of medical images to be evaluated to produce individual predictions of a medical condition output for each of the set of medical images, as taught by Wang, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
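For illustration only, the Monte Carlo dropout procedure described in the Wang passages above (sampling a dropout-trained network N times at test time and computing the mean prediction and standard deviation) may be sketched as follows. The model, weights, and function names are hypothetical stand-ins for a trained CNN and are not drawn from either reference:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_with_dropout(image, weights, p=0.5):
    """One stochastic forward pass: a toy single-logit stand-in for a
    CNN with dropout left active at test time (hypothetical model)."""
    mask = rng.random(weights.shape) >= p       # Bernoulli dropout mask
    dropped = (weights * mask) / (1.0 - p)      # inverted-dropout rescaling
    logit = float(image @ dropped)
    return 1.0 / (1.0 + np.exp(-logit))         # probability of the finding

def mc_dropout_stats(image, weights, n_samples=100):
    """Sample the model N times with dropout and return the mean
    prediction and standard deviation over the individual predictions."""
    preds = np.array([predict_with_dropout(image, weights)
                      for _ in range(n_samples)])
    return preds.mean(), preds.std()

image = rng.random(8)     # toy 'image' feature vector
weights = rng.random(8)   # toy trained weights
mean_pred, std_pred = mc_dropout_stats(image, weights)
```

The standard deviation serves as the uncertainty estimate; a high value flags a prediction the reader may wish to treat as unreliable.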
Nye does not explicitly disclose the following; however, Bangia discloses:
determining, for each of the plurality of medical images in the set of medical images to be evaluated, a severity score for the medical condition output associated with a respective medical image, wherein the severity score indicates a severity of the medical condition ([31 “performed with an imaging system (e.g., an endoscope, a video capsule endoscope, etc.), to monitor disease progression or disease improvement, or to quantify an effect of a treatment, such as by tracking a severity score,” 39 “determine at least one score corresponding to the image frame, the at least one score representing at least one of informativeness of the image frame to show a given feature affecting the digestive organ or severity of the given feature; and automatically assigning, via the at least one processor, the at least one score to the image frame, based on the output of the artificial neural network in response to the inputting the at least one region of interest,” 63 “first score within a specified range, where the second score corresponds to a severity of the given feature; and automatically arranging the set of image frames according to at least one of the first score or the second score,” 165-175]);
determining an order of the plurality of medical images in the set of medical images to be evaluated based upon: (1) the calculated likelihood score ([63 “where the first score corresponds to an informativeness of the image frame with respect to a likelihood of the image frame depicting presence or absence of a given feature on the inside surface of the digestive organ,” 104, 148, 156]); and (2) the determined severity score ([63 “automatically assigning a second score to each image frame that has the first score within a specified range, where the second score corresponds to a severity of the given feature; and automatically arranging the set of image frames according to at least one of the first score or the second score,” 80]), wherein determining the order of plurality of medical images in the set of input images comprises sorting the plurality of medical images from highest to lowest based upon the calculated likelihood score and the determined severity score ([64 “arranging includes, with respect to at least one of the first score and the second score, at least one of the following: ranking at least some image frames of the set of image frames in descending order of score; ranking at least some image frames of the set of image frames in ascending order of score; assorting at least some image frames of the set of image frames in random order of score; or positioning at least some image frames of the set of image frames in an array based on adjacent image frames in the array having different scores,” 104, 105, 148-152, 165-168]); and
displaying, on a display, the set of input images to be evaluated, wherein the plurality of medical images in the set of input images are displayed in the determined order ([313 “processor 904 may output a presentation of the set of image frames after the shuffling. The presentation may be a visual display, such as a slide show having an presentation of shuffled image frames, where the automated presentation may be automatically advanced in an order of the shuffling, and where the automated presentation may be further arranged or presented according to additional organizational schemes, such as temporal intervals and/or positional spacing,” 314 “organized presentation of image frames may involve temporal spacing of when each of the image frames (or certain selected image frames) may be presented in a visual slide show, for example, by predetermined intervals, randomized intervals, or methodically separated intervals, such as with respect to display time or visual spacing (e.g., distance between concurrently displayed image frames) for any given image frames in sequence,” 315, 316 “pieces of information such as regions of interest, corresponding image frames, and/or (sub)sets of image frames, the information may be shuffled or otherwise rearranged, and presented in an organized fashion. Thus, the information may be recomposed in new and original ways with respect to displays of the same information,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nye to determine, for each of the plurality of medical images in the set of medical images to be evaluated, a severity score for the medical condition output associated with a respective medical image, wherein the severity score indicates a severity of the medical condition; to determine an order of the plurality of medical images in the set of medical images to be evaluated based upon: (1) the calculated likelihood score; and (2) the determined severity score, wherein determining the order of the plurality of medical images in the set of input images comprises sorting the plurality of medical images from highest to lowest based upon the calculated likelihood score and the determined severity score; and to display, on a display, the set of input images to be evaluated, wherein the plurality of medical images in the set of input images are displayed in the determined order, as taught by Bangia, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
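As a purely illustrative sketch of the claimed ordering step (sorting the images from highest to lowest based upon the calculated likelihood score and the determined severity score), assuming the two scores are combined lexicographically; the structure and field names are hypothetical and not taken from the references or the claims:

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    likelihood: float  # mean prediction from the model
    severity: float    # severity score of the predicted condition

def order_worklist(studies):
    """Sort highest to lowest on (likelihood, severity): likelihood
    governs, with severity breaking ties."""
    return sorted(studies, key=lambda s: (s.likelihood, s.severity),
                  reverse=True)

worklist = [
    Study("A", 0.40, 0.9),
    Study("B", 0.95, 0.8),
    Study("C", 0.95, 0.3),
]
ordered = order_worklist(worklist)
# B and C share the higher likelihood; B's greater severity places it first.
```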
Claims 5 and 14: Nye in view of Wang, Bangia, and Benjamin discloses the method and system of claims 1 and 11 above, and Nye further discloses:
wherein determining the order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs includes: identifying images in the set of input images that have a high likelihood of having a severe medical condition according to the calculated likelihood score, wherein the high likelihood is above a predefined threshold value ([88 “plurality of training inputs 911 are provided to a network 921 to develop connections in the network 921 and provide an output to be evaluated by an output evaluator 931. Feedback is then provided by the output evaluator 931 into the network 921 to further develop (e.g., train) the network,” 89, 101 “Based on image intensity values, reference coordinate position, proximity, and/or other characteristics, items determined in the image data can be correlated with likely critical and/or other clinical findings such as a severe pneumothorax, tube within the right mainstem, free air in the bowel,”]); and
sorting, from highest to lowest, the identified images based upon their likelihood score and placing the sorted identified images at the top of the order ([101 “Based on image intensity values, reference coordinate position, proximity, and/or other characteristics, items determined in the image data can be correlated with likely critical and/or other clinical findings such as a severe pneumothorax, tube within the right mainstem, free air in the bowel, etc,” 102, 103, 104 “Alert(s) and/or other notification(s) can escalate in proportion to an immediacy and/or other severity of a probable detected condition, for example,” 145 “rules can be created to determine image/exam priority, and those rules can be stored such as in a DICOM header of an image sent to the PACS 1044. An AI model can be used to set a score or a flag in the DICOM header (e.g., tag the DICOM header) to be used a rule to prioritize those exams,” 146 “prioritization rules can be made available on a cloud-based server, an edge device, and/or a local server to enable cross-modality prioritization of exams in a worklist,” 164 “such as a pneumothorax, is identified by the AI model in the captured image data. For example, AI results can indicate a likely pneumothorax (PTX) in the analyzed image data. In certain examples, feedback can be obtained to capture whether the user agrees with the AI alerts (e.g., select a thumbs up/down, specify manual determination, etc.),” Figs 13-18]).
Claim 6: Nye in view of Wang, Bangia, and Benjamin discloses the method and system of claim 5 above, and Nye further discloses wherein determining the order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs further includes:
sorting, from lowest to highest, images without a high likelihood of having a severe medical condition based upon their calculated likelihood score and placing the sorted images after the sorted identified images in the order ([104 “Alert(s) and/or other notification(s) can escalate in proportion to an immediacy and/or other severity of a probable detected condition, for example,” 145 “rules can be created to determine image/exam priority, and those rules can be stored such as in a DICOM header of an image sent to the PACS 1044. An AI model can be used to set a score or a flag in the DICOM header (e.g., tag the DICOM header) to be used a rule to prioritize those exams,” 146 “prioritization rules can be made available on a cloud-based server, an edge device, and/or a local server to enable cross-modality prioritization of exams in a worklist,” 164 “such as a pneumothorax, is identified by the AI model in the captured image data. For example, AI results can indicate a likely pneumothorax (PTX) in the analyzed image data. In certain examples, feedback can be obtained to capture whether the user agrees with the AI alerts (e.g., select a thumbs up/down, specify manual determination, etc.),” Figs 13-18, and fig. 15 showing a low prioritization]).
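The two-tier ordering recited in claims 5 and 6 (images above a predefined likelihood threshold sorted highest to lowest and placed first, the remaining images sorted lowest to highest and placed after) can be sketched as follows; the threshold value and identifiers are assumed for illustration only:

```python
def order_images(scores, threshold=0.8):
    """scores: mapping of image id -> calculated likelihood score.
    Images above the threshold are sorted highest to lowest and placed
    at the top; the rest are sorted lowest to highest and placed after."""
    high = sorted((img for img in scores if scores[img] > threshold),
                  key=scores.get, reverse=True)
    low = sorted((img for img in scores if scores[img] <= threshold),
                 key=scores.get)
    return high + low

order = order_images({"a": 0.95, "b": 0.5, "c": 0.85, "d": 0.2})
# -> ["a", "c", "d", "b"]
```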
Claims 7 and 15: Nye in view of Wang, Bangia, and Benjamin discloses the method and system of claims 1 and 11 above, and Nye further discloses wherein determining the order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs includes:
identifying images in the set of input images that have a high likelihood of having a severe medical condition according to the calculated likelihood score, wherein the high likelihood is above a predefined threshold value ([101 “Based on image intensity values, reference coordinate position, proximity, and/or other characteristics, items determined in the image data can be correlated with likely critical and/or other clinical findings such as a severe pneumothorax, tube within the right mainstem, free air in the bowel,” 102, 103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding,”]);
calculating an evaluation score based upon the likelihood score and a severity score, wherein the severity score indicates the severity of the medical conditions ([101 “Based on image intensity values, reference coordinate position, proximity, and/or other characteristics, items determined in the image data can be correlated with likely critical and/or other clinical findings such as a severe pneumothorax, tube within the right mainstem, free air in the bowel,” 102, 103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding,”]); and
sorting, from highest to lowest, the determined images based upon their evaluation score and placing the determined images at the top of the order ([104 “Alert(s) and/or other notification(s) can escalate in proportion to an immediacy and/or other severity of a probable detected condition, for example,” 145 “rules can be created to determine image/exam priority, and those rules can be stored such as in a DICOM header of an image sent to the PACS 1044. An AI model can be used to set a score or a flag in the DICOM header (e.g., tag the DICOM header) to be used a rule to prioritize those exams,” 146 “prioritization rules can be made available on a cloud-based server, an edge device, and/or a local server to enable cross-modality prioritization of exams in a worklist,” 164 “such as a pneumothorax, is identified by the AI model in the captured image data. For example, AI results can indicate a likely pneumothorax (PTX) in the analyzed image data. In certain examples, feedback can be obtained to capture whether the user agrees with the AI alerts (e.g., select a thumbs up/down, specify manual determination, etc.),” Figs 13-18, and fig. 15 showing a low prioritization]).
Claim 8: Nye in view of Wang, Bangia, and Benjamin discloses the method and system of claim 1 above, and Nye further discloses wherein determining the order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs includes:
identifying images in the set of input images that have a high likelihood of having a severe medical condition according to the calculated likelihood score ([101 “Based on image intensity values, reference coordinate position, proximity, and/or other characteristics, items determined in the image data can be correlated with likely critical and/or other clinical findings such as a severe pneumothorax, tube within the right mainstem, free air in the bowel,” 102, 103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding,”]);
determining which of the identified images have a likelihood score above a threshold value ([88 “Feedback is then provided by the output evaluator 931 into the network 921 to further develop (e.g., train) the network 921. Additional input 911 can be provided to the network 921 until the output evaluator 931 determines that the network 921 is trained (e.g., the output has satisfied a known correlation of input to output according to a certain threshold, margin of error, etc.),”]); and
sorting, from highest to lowest, the determined images based upon their likelihood score and placing the determined images at the top of the order ([104 “Alert(s) and/or other notification(s) can escalate in proportion to an immediacy and/or other severity of a probable detected condition, for example,” 145 “rules can be created to determine image/exam priority, and those rules can be stored such as in a DICOM header of an image sent to the PACS 1044. An AI model can be used to set a score or a flag in the DICOM header (e.g., tag the DICOM header) to be used a rule to prioritize those exams,” 146 “prioritization rules can be made available on a cloud-based server, an edge device, and/or a local server to enable cross-modality prioritization of exams in a worklist,” 164 “such as a pneumothorax, is identified by the AI model in the captured image data. For example, AI results can indicate a likely pneumothorax (PTX) in the analyzed image data. In certain examples, feedback can be obtained to capture whether the user agrees with the AI alerts (e.g., select a thumbs up/down, specify manual determination, etc.),” Figs 13-18, and fig. 15 showing a low prioritization]).
Claim 9: Nye in view of Wang, Bangia, and Benjamin discloses the method and system of claim 8 above, and Nye further discloses wherein determining the order of the set of input images to be evaluated based upon the calculated likelihood score and a severity of the medical condition outputs further includes:
placing the identified images with a likelihood score below the threshold value after the determined images ([103 “a strength of correlation or connection in the learning network 1026 can translate into a percentage or numerical score indicating a probability of correct detection/diagnosis of the finding in the image data, a confidence in the identification of the finding,” 104 “image data and associated finding(s) can be provided via the output 1030 to be displayed, reported, logged, and/or otherwise used in a notification or alert to a healthcare practitioner such as a Tech, nurse, intensivist, trauma surgeon, etc., to act quickly on the critical and/or other clinical finding. In some examples, the probability and/or confidence score, and/or a criticality index/score associated with the type of finding, size of finding, location of finding, etc., can be used to determine a severity, degree, and/or other escalation of the alert/notification to the healthcare provider,”]); and
sorting, from lowest to highest, images with a likelihood score below the threshold value and placing the sorted images after the identified images with a likelihood score below the threshold value in the order ([104 “Alert(s) and/or other notification(s) can escalate in proportion to an immediacy and/or other severity of a probable detected condition, for example,” 145 “rules can be created to determine image/exam priority, and those rules can be stored such as in a DICOM header of an image sent to the PACS 1044. An AI model can be used to set a score or a flag in the DICOM header (e.g., tag the DICOM header) to be used a rule to prioritize those exams,” 146 “prioritization rules can be made available on a cloud-based server, an edge device, and/or a local server to enable cross-modality prioritization of exams in a worklist,” 164 “such as a pneumothorax, is identified by the AI model in the captured image data. For example, AI results can indicate a likely pneumothorax (PTX) in the analyzed image data. In certain examples, feedback can be obtained to capture whether the user agrees with the AI alerts (e.g., select a thumbs up/down, specify manual determination, etc.),” Figs 13-18, and fig. 15 showing a low prioritization]).
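For clarity of the record only, the two-tier ordering recited in claims 8-9 (above-threshold images sorted highest to lowest and placed first; below-threshold images sorted lowest to highest and placed after) may be sketched as follows. The function and identifiers are hypothetical illustrations and do not appear in the cited references:

```python
# Illustrative sketch of the ordering recited in claims 8-9 (hypothetical names).
def order_worklist(images, threshold):
    """images: list of (image_id, likelihood_score) pairs."""
    above = [img for img in images if img[1] >= threshold]
    below = [img for img in images if img[1] < threshold]
    # Above-threshold images sorted from highest to lowest, placed first.
    above.sort(key=lambda img: img[1], reverse=True)
    # Below-threshold images sorted from lowest to highest, placed after.
    below.sort(key=lambda img: img[1])
    return above + below
```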
Claim 10: Nye in view of Wang discloses the method and system of claim 1 above, and Nye further discloses wherein determining statistical parameters for the different output of the machine learning model includes:
training the machine learning model ([37, 38, 53 “deep learning neural network can be trained on a set of expert classified data, classified and further annotated for object localization, for example. This set of data builds the first parameters for the neural network, and this would be the stage of supervised learning,” 54-60]);
Nye does not explicitly disclose; however, Wang discloses:
inputting a training data set into a plurality of trained instances of the machine learning model, wherein each of the plurality of trained instances of the machine learning model uses different dropout parameters, and performing a statistical analysis on the outputs from the plurality of trained instances of machine learning models ([117 “stride of each convolutional layer is set to 1 and the number of output channels of convolution in each block is set to a fixed number C. In order to use multi-scale features, we concatenate the features from different blocks to get a composed feature of length 5C. This feature is fed into a classifier which is implemented by two additional layers Conv6 and Conv7, as shown in FIG. 5. Conv6 and Conv7 use convolutional kernels with a size of 1×1 and dilation factor of 1, and the number of output channels for them is 2C and 2 respectively,” 118, 119, 189, 190, 234 “deep network trained with dropout can be cast as a Bayesian approximation of a Gaussian process [64]. Given a set of training data and their labels {X,Y}, training a network F(⋅,W) with dropout has the effect of approximating the posterior distribution p(W|{X,Y}) by minimising the Kullback-Leibler divergence term, i.e. KL(q(W)∥p(W|{X,Y})); where q(W) is an approximating distribution over the weight matrices W with their elements randomly set to zero according to Bernoulli random variables,” 235 “extended network is trained with a dropout ratio of 0.5 applied to the newly inserted layer. At test time, the network is sampled N times using dropout. The final segmentation can be obtained by majority voting. The percentage of samples which disagree with the voting results is typically computed at each voxel as the uncertainty estimate,”]).
Therefore, it would have been obvious for Nye to input a training data set into a plurality of trained instances of the machine learning model, wherein each of the plurality of trained instances of the machine learning model uses different dropout parameters, and to perform a statistical analysis on the outputs from the plurality of trained instances of the machine learning model, as per the steps of Wang, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
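For illustration only, the statistical analysis over outputs of a plurality of model instances described above may be sketched as follows. The callables standing in for trained model instances are hypothetical placeholders; actual instances would be networks trained with different dropout parameters, as in Wang:

```python
import statistics

# Illustrative sketch (hypothetical names): compute statistics across the
# outputs of several model instances for the same input.
def ensemble_statistics(instances, x):
    """Return the mean and population standard deviation of the
    instances' scores for input x."""
    scores = [instance(x) for instance in instances]
    return statistics.mean(scores), statistics.pstdev(scores)
```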
Claim 11: Nye discloses a system for prioritizing a set of medical images to be evaluated using a machine learning model, comprising: a memory; a processor connected to the memory ([92 “Radiologist worklists are prioritized by putting stat images first, followed by images in order from oldest to newest,” 44 “use neural networks and/or other machine learning to implement a new workflow for image and associated patient analysis including generating alerts based on radiological findings may be generated and delivered at the point of care of a radiology exam,” 101 “the learning network 1026, such as a deep learning network, other CNN, and/or other machine learning network, etc., receives the pre-processed image data at its input nodes and evaluates the image data according to the nodes,”]), the processor configured to:
calculate a likelihood score for each medical condition output based upon determined statistical parameters for the different outputs of the machine learning model ([103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding, etc. For example, a strength of correlation or connection in the learning network 1026 can translate into a percentage or numerical score indicating a probability of correct detection/diagnosis of the finding in the image data,” 108 “the more information provided to an AI algorithm model, the more accurate the prediction generated by the model. As described above, AI models can be used to deploy algorithms on an imaging device to provide bedside, real-time point of care notifications when a patient has a critical finding,” 120 “pre-processor 1024 can apply techniques such as down-sampling, anatomical segmentation, normalizing with mean and/or standard deviation of training population, contrast enhancement, etc., to scale or reduce image data size for further processing (e.g., by presenting the learning network 1026 with fewer samples representing the image data, etc.),”]);
Nye does not explicitly disclose; however, Benjamin discloses:
identifying the set of medical images to be evaluated by the trained machine learning model, comprising identifying a plurality of medical images from a medical image repository, each of the plurality of medical images being unread by a medical professional ([65 “setting an order of priority of the cases the reader is allowed to choose at a given time, by: a) the computer system providing a deadline for reading each case; b) the computer system finding the time to deadline for each case, at the given time; c) the computer system providing an index of importance function of time to deadline and of characteristics of a case; d) the computer system calculating an index of importance for each case according to the index of importance function,” 68 “two different categories corresponding to a higher and a lower level of urgency, for a given time to deadline and other characteristics, the index of importance function is higher for the category corresponding to the higher level of urgency than for the category corresponding to the lower level of urgency,” 72 “assign cases to the workstations of readers who choose them; wherein the software is configured so that, for at least some of the cases, initially only a portion of the readers are allowed or encouraged to choose the case, but over time, as the case becomes more urgent to read if it is still not read, the case is escalated a first time by the computer system adding one or more other readers to the readers who are allowed or encouraged to choose the case, and over further time, if the case is still not read, the case is escalated at least one additional time by the computer system adding one or more other readers to the readers who are allowed or encouraged to choose the case,” 73 “computer system for automatically assigning each of a plurality of medical imaging cases for reading by one of a plurality of readers, the system comprising: a) a receiving module configured to receive medical imaging cases from one or more 
sites; b) a plurality of workstations, each used by one of the readers; c) a reader I/O module configured to display on a worklist on a workstation of each reader, information about a set of cases that the reader is allowed to choose for reading, the information indicating that the reader is encouraged to preferentially choose a displayed case over another displayed case, to receive information about the readers' choices of cases for reading, and to assign cases to the workstations of the readers,” 74, 77 “computer system displaying together with the worklist on the workstations of at least some of the readers how many unread cases match each of a plurality of different subspecialties, including subspecialties that are not indicated on the profile of that reader, or how many unread cases come from each of a plurality of different sites of origin, or both; d) the computer system receiving information on choices of cases by the readers,” 106 “a case with lower urgency for reading, only a portion of the readers, for example readers who are better qualified or more suitable than other readers to read the case, are allowed or encouraged to choose it for reading initially, but as the urgency for reading the case increases with time, if it is still not read, other readers are added to the portion of readers who are allowed or encouraged to choose it for reading. Allowing or encouraging additional readers to choose a case that has become more urgent is referred to herein as “escalating” the case,” 107-116, 117 “case may be escalated at one or more escalation times while it remains unread. Optionally, the escalation times occur at certain fractions of the total time available for reading a case, for example as specified by the SLA. Alternatively or additionally, the escalation times occur at certain absolute time intervals after the time the study was performed. 
When a case is escalated, for example because it has remained unread for too long, the criteria for exposing the case are changed to be more inclusive,”]).
Therefore, it would have been obvious for Nye to identify the set of medical images to be evaluated by the trained machine learning model, comprising identifying a plurality of medical images from a medical image repository, each of the plurality of medical images being unread by a medical professional, as per the steps of Benjamin, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Nye does not explicitly disclose; however, Wang discloses:
train the machine learning model with a training data set of medical images in which the training links Bayesian inferences in deep Gaussian processes ([234 “deep network trained with dropout can be cast as a Bayesian approximation of a Gaussian process [64]. Given a set of training data and their labels {X,Y}, training a network F(⋅,W) with dropout has the effect of approximating the posterior distribution p(W|{X,Y}) by minimising the Kullback-Leibler divergence term, i.e. KL(q(W)∥p(W|{X,Y})); where q(W) is an approximating distribution over the weight matrices W with their elements randomly set to zero according to Bernoulli random variables,” 235 “extended network is trained with a dropout ratio of 0.5 applied to the newly inserted layer. At test time, the network is sampled N times using dropout. The final segmentation can be obtained by majority voting. The percentage of samples which disagree with the voting results is typically computed at each voxel as the uncertainty estimate,”]) to assess uncertainties in outputs of the machine learning model, wherein the outputs include a medical condition shown in the input medical images ([219 “the medical image computing domain, recent years have seen a growing number of applications using CNNs. Although there have been recent advances in tailoring CNNs to analyse volumetric images, most of the work to date studies image representations in 2D. While volumetric representations are more informative, the number of voxels scales cubically with the size of the region of interest,” 222 “uncertainty of the segmentation is also an important parameter for indicating the confidence and reliability of an algorithm [64, 75, 76]. The high uncertainty of a labelling can be a sign of an unreliable classification. 
The feasibility of voxel-level uncertainty estimation is demonstrated herein using Monte Carlo samples of the proposed network with dropout at test time,” 234, 235, 250 “provides an uncertainty map generated with 100 Monte Carlo samples using dropout, while the bottom row represents an uncertainty map with a threshold at set at 0.1. It is clear from FIG. 41 that the uncertainties near the boundaries of different structures are relatively higher than the other regions. Note that displaying the segmentation uncertainty can be useful to a user, for example, in terms of the most effective locations for providing manual indications (e.g. clicks or scribbles) of the correct segmentation,”]);
run the trained machine learning model on the set of medical images to be evaluated to produce a medical condition output for each of the set of medical images ([82 “image may be multimodal, in that it records multiple intensity values according to spatial location for multiple respective imaging modalities, such as for different modes of magnetic resonance (MR) image, and/or to combine images from different types of images, such as X-ray (e.g. computed tomography, CT), ultrasound, gamma ray (e.g. positron emission tomography, etc),” 94 “depending on the results from the proposed segmentation, there may be zero or multiple indications corresponding to a single segment—e.g. there might be two indications denoting regions that should belong to the first segment, but no indications with respect to the second segment,” 107-110, 218 “present application investigates efficient and flexible elements of modern convolutional networks such as dilated convolution and residual connection. With these building blocks, a high-resolution, compact convolutional network is proposed for volumetric image segmentation. To illustrate the efficiency of this approach for learning 3D representation from large-scale image data, the proposed network is validated with the challenging task of parcellating 155 neuroanatomical structures from brain MR images,”]);
Therefore, it would have been obvious for Nye to train the machine learning model with a training data set of medical images in which the training links Bayesian inferences in deep Gaussian processes to assess uncertainties in outputs of the machine learning model, wherein the outputs include a medical condition shown in the input medical images, and to run the trained machine learning model on the set of medical images to be evaluated to produce a medical condition output for each of the set of medical images, as per the steps of Wang, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Nye does not explicitly disclose; however, Bangia discloses:
determining, for each of the plurality of medical images in the set of medical images to be evaluated, a severity score for the medical condition output associated with a respective medical image, wherein the severity score indicates a severity of the medical condition ([31 “performed with an imaging system (e.g., an endoscope, a video capsule endoscope, etc.), to monitor disease progression or disease improvement, or to quantify an effect of a treatment, such as by tracking a severity score,” 39 “determine at least one score corresponding to the image frame, the at least one score representing at least one of informativeness of the image frame to show a given feature affecting the digestive organ or severity of the given feature; and automatically assigning, via the at least one processor, the at least one score to the image frame, based on the output of the artificial neural network in response to the inputting the at least one region of interest,” 63 “first score within a specified range, where the second score corresponds to a severity of the given feature; and automatically arranging the set of image frames according to at least one of the first score or the second score,” 165-175]);
determining an order of the plurality of medical images in the set of medical images to be evaluated based upon: (1) the calculated likelihood score ([63 “where the first score corresponds to an informativeness of the image frame with respect to a likelihood of the image frame depicting presence or absence of a given feature on the inside surface of the digestive organ,” 104, 148, 156]); and (2) the determined severity score ([63 “automatically assigning a second score to each image frame that has the first score within a specified range, where the second score corresponds to a severity of the given feature; and automatically arranging the set of image frames according to at least one of the first score or the second score,” 80]), wherein determining the order of plurality of medical images in the set of input images comprises sorting the plurality of medical images from highest to lowest based upon the calculated likelihood score and the determined severity score ([64 “arranging includes, with respect to at least one of the first score and the second score, at least one of the following: ranking at least some image frames of the set of image frames in descending order of score; ranking at least some image frames of the set of image frames in ascending order of score; assorting at least some image frames of the set of image frames in random order of score; or positioning at least some image frames of the set of image frames in an array based on adjacent image frames in the array having different scores,” 104, 105, 148-152, 165-168]); and
displaying, on a display, the set of input images to be evaluated, wherein the plurality of medical images in the set of input images are displayed in the determined order ([313 “processor 904 may output a presentation of the set of image frames after the shuffling. The presentation may be a visual display, such as a slide show having an presentation of shuffled image frames, where the automated presentation may be automatically advanced in an order of the shuffling, and where the automated presentation may be further arranged or presented according to additional organizational schemes, such as temporal intervals and/or positional spacing,” 314 “organized presentation of image frames may involve temporal spacing of when each of the image frames (or certain selected image frames) may be presented in a visual slide show, for example, by predetermined intervals, randomized intervals, or methodically separated intervals, such as with respect to display time or visual spacing (e.g., distance between concurrently displayed image frames) for any given image frames in sequence,” 315, 316 “pieces of information such as regions of interest, corresponding image frames, and/or (sub)sets of image frames, the information may be shuffled or otherwise rearranged, and presented in an organized fashion. Thus, the information may be recomposed in new and original ways with respect to displays of the same information,”]).
Therefore, it would have been obvious for Nye to determine, for each of the plurality of medical images in the set of medical images to be evaluated, a severity score for the medical condition output associated with a respective medical image, wherein the severity score indicates a severity of the medical condition; to determine an order of the plurality of medical images in the set of medical images to be evaluated based upon (1) the calculated likelihood score and (2) the determined severity score, wherein determining the order of the plurality of medical images in the set of input images comprises sorting the plurality of medical images from highest to lowest based upon the calculated likelihood score and the determined severity score; and to display, on a display, the set of input images to be evaluated, wherein the plurality of medical images in the set of input images are displayed in the determined order, as per the steps of Bangia, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Claim 17: Nye in view of Wang discloses the method and system of claim 16 above, and Nye does not explicitly disclose; however, Wang discloses wherein the computer readable instructions further cause the processor to: reweight predictions ([118 “CRF-Net gives a regularized prediction for each pixel, which may be fed into a logistic loss function layer,” 119, 131 “logistic loss function and SGD algorithm may be used for optimization. The mini-batch size was set to 1, the momentum to 0.99, and the weight decay to 5×10⁻⁴ (but any appropriate values or policies may be used),” 194 “Pixels with a foreground probability between p0 and p1 are taken as data with low confidence; g_i,S is the geodesic distance between i and S, and ϵ is a threshold value. With fixed (X, Y) and the weighted loss function, θ is updated by gradient descent or any other suitable method such as LBFGS (limited memory Broyden-Fletcher-Goldfarb-Shanno),”]).
Therefore, it would have been obvious to modify Nye wherein the computer readable instructions further cause the processor to reweight predictions, as per the steps of Wang, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Claim 18: Nye in view of Wang discloses the method and system of claim 16 above, and Nye does not explicitly disclose; however, Wang discloses wherein the computer readable instructions further cause the processor to: determine statistical parameters for different machine learning model outputs ([112 “convolution layer parameters are represented as (C, 2r+1, l), where C is the number of output channels, 2r+1 is the convolutional kernel size and l is the dilation parameter as well as the padding size. The stride of each convolutional layer is set to 1 so that the resolution is kept the same through the network,” 117 “stride of each convolutional layer is set to 1 and the number of output channels of convolution in each block is set to a fixed number C. In order to use multi-scale features, we concatenate the features from different blocks to get a composed feature of length 5C,” 118, 119, 123, 136 “the output resolution of these two networks is ⅛ of the input resolution; therefore their output was up-sampled to obtain the final result. For fine-tuning, the same learning rate schedules as those used for training P-Net were used, with the maximal number of iterations again being set to 100k,” 168, 189, 190]).
Therefore, it would have been obvious to modify Nye wherein the computer readable instructions further cause the processor to determine statistical parameters for different machine learning model outputs, as per the steps of Wang, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Claim(s) 3 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nye (20190156484) in view of Wang et al. (20200167930), Bangia et al. (20210358121), and Benjamin et al. (20200265946), and in further view of Hsieh et al. (20190026608).
Claims 3 and 13: Nye in view of Wang discloses the method and system of claims 1 and 11 above, and Nye further discloses wherein the statistical parameters include a predictive value of the outputs, an uncertainty of the outputs ([103 “probability and/or confidence indicator or score can be associated with the indication of critical and/or other clinical finding(s), a confidence associated with the finding, a location of the finding, a severity of the finding, a size of the finding, and/or an appearance of the finding in conjunction with another finding or in the absence of another finding, etc. For example, a strength of correlation or connection in the learning network 1026 can translate into a percentage or numerical score indicating a probability of correct detection/diagnosis of the finding in the image data,”]).
Nye does not explicitly disclose; however, Hsieh discloses:
wherein the statistical parameters include the standard deviation of noise ([161 “specific image quality metrics include spatial resolution, noise, etc. At block 840, described above, feedback generated by the reconstruction engine 1440 can be collected and stored. Thus, lessons learned by the system 1500 from the reconstruction of the acquired image data can be fed back into the acquisition learning and improvement factory,” 227 “traditional IQ metrics such as full-width at half maximum (FWHM) of the point spread function (PSF), modulation transfer function (MTF) cutoff frequency, maximum visible frequency in line pairs, standard deviation of noise, etc., are not reflecting true task-based image quality. Instead, certain examples provide it is impactful to estimate IQ directly from acquired clinical images”]).
Therefore, it would have been obvious to modify Nye wherein the statistical parameters include the standard deviation of noise, as per the steps of Hsieh, in order to more precisely predict the occurrence of medical conditions, more precisely develop and provide treatments to patients and individuals, and thereby more likely have a positive impact on patient health conditions.
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nye (20190156484) in view of Wang et al. (20200167930) and Bangia et al. (20210358121), in view of Hsieh et al. (20190026608), and in further view of WIPO Patent Publication No. WO 2017068146 A1 to Lobigs (“Lobigs”).
Regarding claim 4, the combination discloses each of the limitations of claim 3 as discussed above, and further discloses:
wherein the likelihood score is calculated for a specific output (Nye, 0103: a probability or confidence indicator score that is associated with the critical finding and indicates strength of correlation in the diagnosis).
Nye does not explicitly recite, but Hsieh teaches statistical parameters that comprise an output mean prediction of the specific output and a standard deviation of the specific output (Hsieh, 0227: statistical parameters for input features, including mean and standard deviation), and an output mean prediction and standard deviation (Hsieh, 0227: statistical parameters for input features, including standard deviation of noise).
Therefore, it would have been obvious to one having ordinary skill in the art of healthcare to modify the statistical parameters of Nye to include mean and standard deviation, as taught by Hsieh, because Hsieh and Nye both deal with using machine learning with medical imaging for purposes of diagnosis, and Hsieh teaches that mean and standard deviation are useful tools for generating outputs from machine learning models.
The combination does not explicitly recite, but Lobigs teaches an equation [reproduced in the reference as an image] for calculating z-scores, equivalent of a likelihood score (Lobigs, pages 16-17: where ME is the mean, and VAR is the variance).
Therefore, it would have been obvious to one having ordinary skill in the art of healthcare to modify the combination to include the above equation, as taught by Lobigs, because Hsieh, Nye, and Lobigs all deal with medical diagnosis, and Lobigs teaches that the above equation is useful for providing z-scores using a Bayesian model in order to identify variations in markers.
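For context only, a conventional z-score formulation consistent with the cited quantities (ME as the mean, VAR as the variance) can be written as follows; the exact form of Lobigs' equation is reproduced in that reference as an image and should be consulted directly at pages 16-17:

```latex
z = \frac{x - \mathrm{ME}}{\sqrt{\mathrm{VAR}}}
```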
Response to Arguments
Applicant's arguments and amendments with respect to the rejection of claims 1, 3-11, and 13-18 (see Remarks/Amendments submitted 10 December 2025) have been carefully considered and are addressed below.
Claim Rejections - 35 USC § 101
Examiner has evaluated the instant claims under the requirements of the 2019 PEG Revised Step 2A, Prongs One and Two, and the requirements of MPEP 2106, and has determined that the claims as amended overcome the previously applied rejection of all pending claims under the statute.
Examiner maintains that the instant claims recite a judicial exception, namely abstract ideas similar to certain methods of organizing human activity, such as managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules and instructions), as well as mental processes, including concepts performed in the human mind (observation, evaluation, judgment, and opinion).
However, as a result of evaluating Applicant's amendments, which specifically detail the processing of medical image data by a machine learning model that evaluates images from the image repository and determines the read or unread status of those images, Examiner determines that the instant invention is directed to a practical application and a technological improvement. The rejection is therefore withdrawn.
Claim Rejections - 35 USC § 103
Applicant's arguments and amendments, see Remarks/Amendments filed 10 December 2025, with respect to the rejection(s) of claim(s) 1, 3-11, and 13-18 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of previously cited references Nye, Wang, Bangia, Hsieh, and Lobigs, and in further view of newly cited reference Benjamin.
Applicant argues that the combination of references of record does not disclose the newly added limitation “identifying the set of medical images to be evaluated by the trained machine learning model, comprising identifying a plurality of medical images from a medical image repository, each of the plurality of medical images being unread by a medical professional,” and that the art rejection of record is therefore not appropriate. Examiner respectfully disagrees and replies that newly cited reference Benjamin clearly discloses making decisions with respect to determining the read or unread status of the collected medical images. Therefore, the rejection of all pending claims under 35 USC 103 is maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached References Cited form 892.
See Hirakawa et al. (20220172831) for disclosures related to the implementation of the display of medical care processes and an associated unread management unit that displays medical care process status on the display screen. See at least paras. [49]-[75].
See Wood (20210027884) for disclosures related to the analysis of radiology exam information and the re-ordering and notification of analysis results. See at least paras. [15]-[34].
See Sinichi et al. (JP 3192834 B2) for disclosures related to a reference image preparation support device capable of preparing a past image or a typical case image to be referenced during interpretation, without advance preparation by an operator. See pages 2-3.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David Stoltenberg whose telephone number is (571) 270-3472.
The examiner can normally be reached Monday-Friday, 8:30 AM to 5:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached at (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300, and the examiner's direct fax phone number is (571) 270-4472.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center at (866) 217-9197 (toll free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID J STOLTENBERG/Primary Examiner, Art Unit 3685