DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-2, 5-7, 12-14, and 17-20 have been amended.
Claims 1-20 remain pending for consideration.
Claim Rejections - 35 USC § 101
The rejection of claims 1-11 under 35 U.S.C. 101 has been withdrawn.
The claims describe a computer program product that improves classification accuracy by training a supervisor to score analysis algorithms based on input data features, similar to predictive evaluation models for edge AI:
Input Data: Images, algorithm attributes, and image attributes, which allows for context-aware scoring.
Supervisor Model: A trained orchestrator (a machine learning model).
Output: Algorithm scores for accuracy, which is a concrete application of model evaluation.
This is a specific, practical application of AI to improve decision-making in image classification, which generally constitutes a technical solution rather than an abstract concept.
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 12, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. (US 20210241152 A1) in view of Schleyen et al. (US 20210343000 A1).
Regarding claim 1, Fong et al. teaches a computer program product for selecting one of a plurality of analysis algorithms to process image data (see para [0056]; “In step 210, a ML selection that specifies a selected ML pipeline is obtained”, and claim 2; “providing the selected ML pipeline to the client”, see also para [0042]; “the set of ML pipelines are selected based on the type of ML algorithms that are designed to generate ML models for the domain of the training dataset. For example, the training dataset may include images”), the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations (see para [0004]; “the invention relates to a non-transitory computer readable medium that includes computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for managing data. The method includes obtaining a request for a machine learning (ML) pipeline selection from a client”); generating training sets including inputs comprising attributes of the analysis algorithms, attributes of the training image data, and the feedback scores (see para [0045]; “The training results may be input to a prediction model. ….The prediction model may take as inputs the training results as well as additional characteristics of the ML pipelines that may factor into the predicted values of the criteria. 
The characteristics may include, for example, a size of the training dataset, a number of dimensions of the training dataset, a number of hyper-parameters of the ML algorithm associated with the ML pipeline”, see also para [0066]; “Each of the identified ML pipelines is processed using a prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline,..”, and claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also para [0027]; “a number of dimensions of the data set (e.g., two dimensional images, three dimensional graphics, etc.)”); training an orchestration supervisor, implementing a machine learning algorithm, with the inputs from the training sets to produce the feedback scores (see para [0024]; “the ML pipeline inference manager (120) includes a pipeline evaluator (122)….. the pipeline evaluator (122) implements a prediction model on one or more sets of ML pipelines. The prediction model may take as an input a variety of factors of a ML pipeline (e.g., 152, 154) to generate the runtime statistics (124) for each ML pipeline. 
Further, the pipeline evaluator (122) generates an ordering of evaluated ML pipelines based on the user preferences (126”, see also claim 6; “and updating the prediction model based on the ML pipeline telemetry” Note: prediction model and pipeline evaluator implies/serves the same function as orchestration supervisor); deploying the trained orchestration supervisor to receive inputs comprising attributes of the analysis algorithms and attributes of the image data to output algorithm scores for the analysis algorithms (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see para [0027]; “the runtime statistics (124) are data structures that specify values related to the operation of each ML pipeline …. an accuracy of the ML pipeline, a training cost, a training speed, an inferred speed, and an inferred cost…. Additional inputs to the prediction model may include....a number of dimensions of the data set (e.g., two dimensional images, three dimensional graphics, etc.)”); wherein the algorithm scores are indicative of an accuracy of the analysis algorithms in classifying the image data (see para [0012]; “multiple criteria such as training cost, inferred speed of execution, accuracy, and training speed”, see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also para [0042]; “the training dataset may include images. 
The ML pipeline inference manager may identify ML algorithms that are associated with classifying images and/or being trained using images”), using the algorithm scores to select at least one analysis algorithm of the analysis algorithms (see claim 8; “inputting the runtime statistics and user preferences associated with the client into a user preference model to obtain an ordering of the set of ML pipelines”, see also para [0056]; “In step 210, a ML selection that specifies a selected ML pipeline is obtained”, and claim 2; “providing the selected ML pipeline to the client”); and forwarding the image data to the selected at least one analysis algorithm to generate at least one classification (see para [0057]; “In step 212, the selected ML pipeline is provided to the client to be deployed in the ML execution environment”, see also para [0062]; “after the ML pipeline is provided to the client, the client may execute the ML pipeline”, and para [0042]; “The ML pipeline inference manager may identify ML algorithms that are associated with classifying images”). However, Fong does not explicitly disclose receiving feedback scores for classifications produced by a plurality of analysis algorithms processing training image data, wherein the feedback scores indicate accuracy of the classifications of the training image data.
In the same field of endeavor, Schleyen et al. teaches the operations comprising: receiving feedback scores for classifications produced by a plurality of analysis algorithms processing training image data, wherein the feedback scores indicate accuracy of the classifications of the training image data (see para [0004]; “obtain a set of images …. upon obtaining a plurality of algorithmic modules,…. for an image of the set of images, select at least one algorithmic module ….. feed the image to the at least one algorithmic module…. obtain a supervised feedback regarding rightness of data…… provided by the algorithmic module……. to generate, based at least on the supervised feedback, a score for each of a plurality of the algorithmic modules”, see also para [0006]; “the score generated for an algorithmic module is representative of a ratio between a number of times a positive feedback has been obtained for this algorithmic module and a number of times this algorithmic module has been selected”, and para [0009]; “the system is configured to select at (1) the algorithmic module M.sub.S based at least on a score associated with the algorithmic module M.sub.S and generated based at least on a supervised feedback previously obtained”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, in view of the system for examination of a semiconductor specimen of Schleyen et al., in order to increase the effectiveness of examination by automating the process (see para [0004]).
Regarding claim 12, the scope of claim 12 is fully incorporated in claim 1, and the rejection of claim 1 is equally applicable here.
Regarding claim 17, the scope of claim 17 is fully incorporated in claim 1, and the rejection of claim 1 is equally applicable here.
Claims 2-7, 11, 13-14, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. in view of Schleyen et al. as applied in claim 1 above, and further in view of Kelm et al. (US 20210065886 A1).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
The combination of Fong et al. and Schleyen et al. does not teach wherein the training image data comprises training medical images and the image data comprises a received patient medical image for a patient, wherein the classifications produced by the analysis algorithms comprise training medical findings for the training medical images and patient medical findings for the patient medical image, and wherein the feedback scores are determined by at least one radiologist reviewing the training medical findings produced for the training medical images.
In the same field of endeavor, Kelm et al. further teach wherein the training image data comprises training medical images and the image data comprises a patient medical image for a patient (see para [0124]; “the one or more medical datasets 101-103 are obtained. For instance, the one or more medical datasets 101-103 could be received from one or more medical devices, e.g., medical laboratory devices, medical imaging devices, etc.. The one or more medical datasets 101-103 are associated with the patient”, see also para [0148]; “For instance, in case the medical datasets 101-103 include medical imaging datasets, those may be displayed”), wherein the classifications produced by the analysis algorithms comprise training medical findings for the training medical images and patient medical findings for the patient medical image (see para [0174]; “Which best-fit evaluation algorithm(s) to select in order to predict classification or categorization or scoring of disease?..... from the perspective of the reporting physician”, see also para [0178]; “In order to determine the required output of the evaluation algorithm, report entries that are to be prefilled with outputs of the evaluation algorithm can be semantically annotated in the medical report template. Consider for example a lesion in the report of Table 2 shown above”), and wherein the feedback scores are determined by at least one radiologist reviewing the training medical findings produced for the training medical images (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms…the user may not be satisfied with the accuracy of this segmentation or annotation. 
Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation” and para [0154]; “It would also be possible that the selection between the multiple user-interaction modes can be based on accuracy feedback of previous selections of the one or more evaluation algorithms”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, in view of the system for examination of a semiconductor specimen of Schleyen et al. and the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al., in order to record the user review as feedback scores for the training set’s medical findings (see para [0124]).
Regarding claim 3, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach wherein the inputs for the training sets include at least one attribute of the training medical images selected from the group consisting of (see para [0066]; “prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline, a number of hyper-parameters associated with the ML pipeline, a size of the training dataset, a number of dimensions of the training dataset, a number of inputs, and a number of outputs”).
Kelm et al. in the combination further teach a medical imaging machine used to create a training medical image (see para [0079]; “Examples of imaging modalities that can interact with the techniques described herein include, but are not limited to: X-ray imaging; computer tomography (CT); ultrasound imaging; positron emission tomography (PET); and magnetic resonance imaging (MRI)”), a medical condition to be detected by the analysis algorithm (see para [0099]; “the analysis can extract one or more medical conditions of the patient from the one or more medical datasets”, see also para [0102]; “while two of the evaluation algorithms 711-713 may both operate based on liver MRT medical imaging datasets, one of those evaluation algorithms may evaluate the liver size, while the other one of those evaluation algorithms may evaluate a fat content of the liver”, and para [0177]; “given among the algorithms in there is one algorithm which can analyze the coronaries in data of type “CT angiography of coronary arteries” and one that can analyze data of type “CT of chest”…..The semantic reasoning may define “CT angiography of coronary arteries” as a subclass of “CT of chest”; thus, based on this knowledge both evaluation algorithms can proof suitable” Note: the condition to be detected is an obvious attribute to record and use as an input), information on an imaging center operating the medical imaging machine to produce the training medical image, and information on a medical clinic ordering the training medical image (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc.—the appropriate evaluation algorithm(s) is(are) selected” Note: the source of medical dataset corresponds to the site/imaging center producing the study and obvious to include as a training-set attribute).
Regarding claim 4, the rejection of claim 2 is incorporated herein.
Kelm et al. in the combination further teach wherein the inputs for the training sets include demographics of patients from which the training medical images were generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc.”).
Regarding claim 5, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach wherein the inputs for the training sets include at least one attribute of the analysis algorithms selected from the group consisting of an identifier of an analysis algorithm for which a feedback score is provided and a cost of running the analysis algorithm (see para [0059]; Fig. 3 discloses identifiers, i.e., “ML Pipeline A…ML Pipeline N”, and shows a table with a name column listing “Pipeline A…Pipeline B….”, see para [0007]; “FIG. 2 shows a flowchart for generating a personalized machine learning (ML) pipeline selection”, see also para [0066]; “Each of the identified ML pipelines is processed using a prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline, … The outputs of the data points in the standard dataset include the following criteria: accuracy, training cost, training speed, inferred cost, and inferred speed”).
Kelm et al. in the combination further teach a type of condition detected and measured by the analysis algorithm (see para [0110]; “a first one of the medical report templates may configure the medical report such as to specify a medical condition associated with a brain tumor; while a second one of the medical report templates may configure the medical report such as to specify a medical condition associated with fatty liver disease”, see also para [0174]; “g) Which best-fit evaluation algorithm(s) to select in order to predict classification or categorization or scoring of disease?”).
Regarding claim 6, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach wherein the inputs to the deployed trained orchestration supervisor (see para [0049]; “the pipeline evaluator inputs the generated runtime statistics with the user preferences to a user preference model to identify the highest ranking ML pipelines in the set of ML pipelines based on preferred user preferences of the user operating on the client….. the user can then provide a further user input 114 as feedback”).
Kelm et al. in the combination further teach wherein the inputs for the training sets include an attribute of a radiologist entity that provided a feedback score (see para [0003]; “the medical imaging datasets are analyzed by a user, e.g., by a radiologist”, and para [0130]; “a user input is received and that the selection algorithms are trained based on the user input” Note: since the reviewer is a radiologist, including a radiologist attribute (identity/role/preferences) among the training inputs is an obvious implementation so the trained selector reflects who supplied the feedback), and an attribute of a radiologist entity to which the patient medical image is to be forwarded with a patient medical finding generated by the selected at least one analysis algorithm (see para [0183]; “the medical report 180 can be output to the user..The user can then study details of the medical report 180..”).
Regarding claim 7, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach attributes and identifiers of the analysis algorithms (see claim 3-4; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics…the inputs comprise at least one of: the size of the training dataset, a type of ML algorithm associated with each ML pipeline, a number of hyper-parameters of each ML pipeline, and a standard dataset”, see also para [0068]; “Pipeline F having the highest accuracy listed first and Pipeline D having the lowest accuracy being listed last. Client A selects Pipeline C to implement. The ordering and the runtime statistics are displayed in a GUI display (301B) of the client (300)”).
Kelm et al. in the combination further teach wherein the inputs for the training sets are selected from the group consisting of attributes of the training medical images (see para [0009]; “obtaining one or more medical datasets of a patient. The method also includes triggering a selection of one or more algorithms from an algorithm repository….The selection is based on the one or more medical datasets”), location of where the training medical images were generated (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the training medical images (see para [0079]; “imaging modalities that can interact with the techniques described herein include, but are not limited to: X-ray imaging; computer tomography (CT); ultrasound imaging; positron emission tomography (PET); and magnetic resonance imaging (MRI). Non-imaging medical datasets can also widely vary, depending on the scenario”), demographics of patients from which the training medical images were generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc”), attributes and identifiers of the analysis algorithms used to produce the classifications for which the feedback scores were provided (see para [0104]; “The meta data 715-717 can be used by suppliers of the evaluation algorithms 711-713 to appropriately annotate the functional characteristics of the provided evaluation”), a location of a medical clinic that ordered the patient medical image (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), and attributes of at least one radiologist entity that provided the feedback scores (see para [0003]; “the medical imaging datasets are analyzed by a user, e.g., by a radiologist and/or a pathologist, and a medical report is drawn up by the radiologist”, see also para [0150]; “as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”), and wherein the inputs to the deployed trained orchestration supervisor are selected from the group consisting of attributes of the patient medical image (see para [0009]; “The method also includes triggering a selection of one or more algorithms from an algorithm repository. The algorithm repository includes multiple candidate algorithms. 
The selection is based on the one or more medical datasets”, see also para [0178]; “Then, evaluations algorithm having the capability to provide the semantic description of the processable input data and the produced output data can be identified by respective meta data”), location of where the patient medical image was generated (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the patient medical image (see para [0177]; “analyze the coronaries in data of type “CT angiography of coronary arteries” and one that can analyze data of type “CT of chest”. Now, the medical dataset is of the type “CT angiography of coronary arteries”. A match between the medical dataset and the two evaluation algorithms can be performed. The semantic reasoning may define “CT angiography of coronary arteries” as a subclass of “CT of chest”; thus, based on this knowledge both evaluation algorithms can proof suitable”), demographics of a patient from which the patient medical image was generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc.”), a location of a medical clinic that ordered the patient medical image for the patient, and attributes of a radiologist to which the patient medical image and at least one patient medical finding from the selected at least one analysis algorithm will be forwarded (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”, see also para [0150]; “For illustration, it would be possible that if the user selects one or more of the evaluation algorithms from the sorted list, e.g., as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”).
Regarding claim 11, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach generating training sets for the updated analysis algorithm including inputs comprising an attribute of the updated analysis algorithm (see para [0066]; “a prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline, a number of hyper-parameters associated with the ML pipeline”), and training the orchestration supervisor with the inputs from the training sets for the updated analysis algorithm to produce the feedback scores for the updated analysis algorithm based on the inputs (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also claim 6; “updating the prediction model based on the ML pipeline telemetry” Note: the prediction model (orchestration supervisor) is retrained to predict the score for that updated algorithm).
Kelm et al. in the combination further teach wherein the operations further comprise: receiving indication that one of the analysis algorithms has been updated to an updated analysis algorithm (see claim 6; “providing access to the algorithm repository to enable third parties to upload new candidate algorithms along with the meta-data”); an attribute of training medical images (see para [0003]; “medical imaging datasets are acquired using one or more imaging modalities” Note: machine/tech attribute of the image), and the feedback scores for medical findings produced by the updated analysis algorithm (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms 711-714…. Sometimes, the user may not be satisfied with the accuracy of this segmentation or annotation. Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation”).
Regarding claim 13, the rejection of claim 12 is incorporated herein.
Kelm et al. in the combination further teach wherein the training image data comprises training medical images and the image data comprises a patient medical image for a patient (see para [0124]; “the one or more medical datasets 101-103 are obtained. For instance, the one or more medical datasets 101-103 could be received from one or more medical devices, e.g., medical laboratory devices, medical imaging devices, etc.. The one or more medical datasets 101-103 are associated with the patient”, see also para [0148]; “For instance, in case the medical datasets 101-103 include medical imaging datasets, those may be displayed”), wherein the classifications produced by the analysis algorithms comprise training medical findings for the training medical images and patient medical findings for the patient medical image (see para [174]; “Which best-fit evaluation algorithm(s) to select in order to predict classification or categorization or scoring of disease?..... from the perspective of the reporting physician”, see also para [0178]; “In order to determine the required output of the evaluation algorithm, report entries that are to be prefilled with outputs of the evaluation algorithm can be semantically annotated in the medical report template. Consider for example a lesion in the report of Table 2 shown above”), and wherein the feedback scores are determined by at least one radiologist reviewing the training medical findings produced for the training medical images (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms…the user may not be satisfied with the accuracy of this segmentation or annotation. 
Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation” and para [0154]; “It would also be possible that the selection between the multiple user-interaction modes can be based on accuracy feedback of previous selections of the one or more evaluation algorithms”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, in view of the system for examination of a semiconductor specimen of Schleyen et al. and the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al., in order to record the user review as feedback scores for the training set’s medical findings (see para [0124]).
Regarding claim 14, the rejection of claim 13 is incorporated herein.
Fong et al. in the combination further teach attributes and identifiers of the analysis algorithms (see claim 3-4; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics…the inputs comprise at least one of: the size of the training dataset, a type of ML algorithm associated with each ML pipeline, a number of hyper-parameters of each ML pipeline, and a standard dataset”, see also para [0068]; “Pipeline F having the highest accuracy listed first and Pipeline D having the lowest accuracy being listed last. Client A selects Pipeline C to implement. The ordering and the runtime statistics are displayed in a GUI display (301B) of the client (300)”).
Kelm et al. in the combination further teach wherein the inputs for the training sets are selected from the group consisting of attributes of the training medical images (see para [0009]; “obtaining one or more medical datasets of a patient. The method also includes triggering a selection of one or more algorithms from an algorithm repository….The selection is based on the one or more medical datasets”), location of where the training medical images were generated (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the training medical images (see para [0079]; “imaging modalities that can interact with the techniques described herein include, but are not limited to: X-ray imaging; computer tomography (CT); ultrasound imaging; positron emission tomography (PET); and magnetic resonance imaging (MRI). Non-imaging medical datasets can also widely vary, depending on the scenario”), demographics of patients from which the training medical images were generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc”), attributes and identifiers of the analysis algorithms used to produce the classifications for which the feedback scores were provided (see para [0104]; “The meta data 715-717 can be used by suppliers of the evaluation algorithms 711-713 to appropriately annotate the functional characteristics of the provided evaluation”), a location of a medical clinic that ordered the patient medical image (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), and attributes of at least one radiologist entity that provided the feedback scores (see para [0003]; “the medical imaging datasets are analyzed by a user, e.g., by a radiologist and/or a pathologist, and a medical report is drawn up by the radiologist”, see also para [0150]; “as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”), and wherein the inputs to the deployed trained orchestration supervisor are selected from the group consisting of attributes of the patient medical image (see para [0009]; “The method also includes triggering a selection of one or more algorithms from an algorithm repository. The algorithm repository includes multiple candidate algorithms. 
The selection is based on the one or more medical datasets”, see also para [0178]; “Then, evaluations algorithm having the capability to provide the semantic description of the processable input data and the produced output data can be identified by respective meta data”), location of where the patient medical image was generated (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the patient medical image (see para [0177]; “analyze the coronaries in data of type “CT angiography of coronary arteries” and one that can analyze data of type “CT of chest”. Now, the medical dataset is of the type “CT angiography of coronary arteries”. A match between the medical dataset and the two evaluation algorithms can be performed. The semantic reasoning may define “CT angiography of coronary arteries” as a subclass of “CT of chest”; thus, based on this knowledge both evaluation algorithms can proof suitable”), demographics of a patient from which the patient medical image was generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc.”), a location of a medical clinic that ordered the patient medical image for the patient, and attributes of a radiologist to which the patient medical image and the at least one medical finding from the selected at least one analysis algorithm will be forwarded (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”, see also para [0150]; “For illustration, it would be possible that if the user selects one or more of the evaluation algorithms from the sorted list, e.g., as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”).
Regarding claim 16, the rejection of claim 13 is incorporated herein.
Fong et al. in the combination further teach generating training sets for the updated analysis algorithm including inputs comprising an attribute of the updated analysis algorithm (see para [0066]; “a prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline, a number of hyper-parameters associated with the ML pipeline”), and training the orchestration supervisor with the inputs from the training sets for the updated analysis algorithm to produce the feedback scores for the updated analysis algorithm based on the inputs (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also claim 6; “updating the prediction model based on the ML pipeline telemetry” Note: the prediction model (orchestration supervisor) is retrained to predict the score for that updated algorithm).
Kelm et al. in the combination further teach wherein the operations further comprise: receiving an indication that one of the analysis algorithms has been updated to an updated analysis algorithm (see claim 6; “providing access to the algorithm repository to enable third parties to upload new candidate algorithms along with the meta-data”); an attribute of training medical images (see para [0003]; “medical imaging datasets are acquired using one or more imaging modalities” Note: machine/tech attribute of the image), and the feedback scores for medical findings produced by the updated analysis algorithm (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms 711-714…. Sometimes, the user may not be satisfied with the accuracy of this segmentation or annotation. Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation”).
Regarding claim 18, the rejection of claim 17 is incorporated herein.
Kelm et al. in the combination further teach wherein the training data comprises training medical images and the patient data comprises a patient medical image for a patient (see para [0124]; “the one or more medical datasets 101-103 are obtained. For instance, the one or more medical datasets 101-103 could be received from one or more medical devices, e.g., medical laboratory devices, medical imaging devices, etc.. The one or more medical datasets 101-103 are associated with the patient”, see also para [0148]; “For instance, in case the medical datasets 101-103 include medical imaging datasets, those may be displayed”), wherein the classifications produced by the analysis algorithms comprise training medical findings for the training medical images and patient medical findings for the patient medical image (see para [0174]; “Which best-fit evaluation algorithm(s) to select in order to predict classification or categorization or scoring of disease?..... from the perspective of the reporting physician”, see also para [0178]; “In order to determine the required output of the evaluation algorithm, report entries that are to be prefilled with outputs of the evaluation algorithm can be semantically annotated in the medical report template. Consider for example a lesion in the report of Table 2 shown above”), and wherein the feedback scores are determined by at least one radiologist reviewing the training medical findings produced for the training medical images (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms…the user may not be satisfied with the accuracy of this segmentation or annotation. 
Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation” and para [0154]; “It would also be possible that the selection between the multiple user-interaction modes can be based on accuracy feedback of previous selections of the one or more evaluation algorithms”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, and the system for examination of a semiconductor specimen of Schleyen et al., in view of the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al., in order to record the user review as feedback scores for the medical findings of the training set (see para [0124]).
Regarding claim 19, the rejection of claim 18 is incorporated herein.
Fong et al. in the combination further teach attributes and identifiers of the analysis algorithms (see claims 3-4; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics…the inputs comprise at least one of: the size of the training dataset, a type of ML algorithm associated with each ML pipeline, a number of hyper-parameters of each ML pipeline, and a standard dataset”, see also para [0068]; “Pipeline F having the highest accuracy listed first and Pipeline D having the lowest accuracy being listed last. Client A selects Pipeline C to implement. The ordering and the runtime statistics are displayed in a GUI display (301B) of the client (300)”).
Kelm et al. in the combination further teach wherein the inputs for the training sets are selected from the group consisting of attributes of the training medical images (see para [0009]; “obtaining one or more medical datasets of a patient. The method also includes triggering a selection of one or more algorithms from an algorithm repository….The selection is based on the one or more medical datasets”), location of where the training medical images were generated (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the training medical images (see para [0079]; “imaging modalities that can interact with the techniques described herein include, but are not limited to: X-ray imaging; computer tomography (CT); ultrasound imaging; positron emission tomography (PET); and magnetic resonance imaging (MRI). Non-imaging medical datasets can also widely vary, depending on the scenario”), demographics of patients from which the training medical images were generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc”), attributes and identifiers of the analysis algorithms used to produce the classifications for which the feedback scores were provided (see para [0104]; “The meta data 715-717 can be used by suppliers of the evaluation algorithms 711-713 to appropriately annotate the functional characteristics of the provided evaluation”), a location of a medical clinic that ordered the training medical image (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), and attributes of at least one radiologist entity that provided the feedback scores (see para [0003]; “the medical imaging datasets are analyzed by a user, e.g., by a radiologist and/or a pathologist, and a medical report is drawn up by the radiologist”, see also para [0150]; “as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”), and wherein the inputs to the deployed trained orchestration supervisor are selected from the group consisting of attributes of the patient medical image (see para [0009]; “The method also includes triggering a selection of one or more algorithms from an algorithm repository. The algorithm repository includes multiple candidate algorithms. 
The selection is based on the one or more medical datasets”, see also para [0178]; “Then, evaluations algorithm having the capability to provide the semantic description of the processable input data and the produced output data can be identified by respective meta data”), location of where the patient medical image was generated (see para [0130]; “e.g., source of the medical dataset, user, patient, time of the day, etc”), technology used to generate the patient medical image (see para [0177]; “analyze the coronaries in data of type “CT angiography of coronary arteries” and one that can analyze data of type “CT of chest”. Now, the medical dataset is of the type “CT angiography of coronary arteries”. A match between the medical dataset and the two evaluation algorithms can be performed. The semantic reasoning may define “CT angiography of coronary arteries” as a subclass of “CT of chest”; thus, based on this knowledge both evaluation algorithms can proof suitable”), demographics of a patient from which the patient medical image was generated (see para [0126]; “The patient dataset 131 includes patient-specific information of the patient. 
For example, the patient dataset 113 could specify one or more of the following patient-specific information elements: a previous diagnosis of the patient; a therapeutic history of the patient including, e.g., medication, etc.”), a location of a medical clinic that ordered the patient medical image for the patient, and attributes of a radiologist to which the patient medical image and the at least one medical finding from the selected at least one analysis algorithm will be forwarded (see para [0130]; “depending on certain process parameters of the particular clinical workflow—e.g., source of the medical dataset, user, patient, time of the day, etc”, see also para [0150]; “For illustration, it would be possible that if the user selects one or more of the evaluation algorithms from the sorted list, e.g., as part of the user input 111, the respective selection algorithm is trained accordingly. Thus, recurrent training would be possible to refine the selection algorithm to provide a more relevant ranking”).
Regarding claim 20, the rejection of claim 18 is incorporated herein.
Fong et al. in the combination further teach generating training sets for the updated analysis algorithm including inputs comprising an attribute of the updated analysis algorithm (see para [0066]; “a prediction model that is trained using a standard dataset that includes data points that specify inputs such as the type of ML algorithm of the ML pipeline, a number of hyper-parameters associated with the ML pipeline”), and training the orchestration supervisor with the inputs from the training sets for the updated analysis algorithm to produce the feedback scores for the updated analysis algorithm based on the inputs (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also claim 6; “updating the prediction model based on the ML pipeline telemetry” Note: the prediction model (orchestration supervisor) is retrained to predict the score for that updated algorithm).
Kelm et al. in the combination further teach wherein the operations further comprise: receiving an indication that one of the analysis algorithms has been updated to an updated analysis algorithm (see claim 6; “providing access to the algorithm repository to enable third parties to upload new candidate algorithms along with the meta-data”); an attribute of training medical images (see para [0003]; “medical imaging datasets are acquired using one or more imaging modalities” Note: machine/tech attribute of the image), and the feedback scores for medical findings produced by the updated analysis algorithm (see para [0184]; “the user can then provide a further user input 114 as feedback, e.g., via the HMI 214. The user input pertains to the medical report 180 or the output of the one or more evaluation algorithms 711-714…. Sometimes, the user may not be satisfied with the accuracy of this segmentation or annotation. Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation”).
Claims 8-9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. and Schleyen et al. in view of Kelm et al. as applied in claims 1 and 2 above, and further in view of Pacheco et al. “Ranking of Classification Algorithms in Terms of Mean–Standard Deviation Using A-TOPSIS”.
Regarding claim 8, the rejection of claim 2 is incorporated herein.
Fong et al. in the combination further teach wherein a feedback score vector for a training medical finding produced by an analysis algorithm for a training medical image includes an accuracy score indicating an accuracy of the training medical finding from the training medical image (see para [0027]; “The criteria may be, for example, an accuracy of the ML pipeline”), a performance score indicating a performance of the analysis algorithm producing the training medical finding (see para [0027]; “The criteria may be, for example, an accuracy of the ML pipeline, a training cost, a training speed, an inferred speed, and an inferred cost” Note: performance is captured via training speed and inferred speed, and related runtime statistics), and a worth value indicating an extent to which a reviewing radiologist considers the analysis algorithm worth a cost (see para [0032]; “the inferred cost is a prediction of the cost for executing the ML model”), and wherein the orchestration supervisor is trained to produce the feedback score vectors for the inputs from the training sets (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also para [0045]; “The prediction model may be trained using a prediction model training dataset” and para [0066]; “Each of the identified ML pipelines is processed using a prediction model that is trained using a standard dataset.. 
The outputs of the data points in the standard dataset include the following criteria: accuracy, training cost, training speed, inferred cost, and inferred speed”), and wherein algorithm scores for the analysis algorithms from the deployed trained orchestration supervisor comprise algorithm score vectors, wherein an algorithm score vector for an analysis algorithm of the analysis algorithms includes an accuracy score, a performance score, a user experience score and a worth value (see para [0025]; “The prediction model may take as an input a variety of factors of a ML pipeline (e.g., 152, 154) to generate the runtime statistics” see also para [0049]; “In step 206, the runtime statistics and the user preferences associated with the client are input into a pipeline evaluator to obtain an ordering of the ML pipelines”).
Kelm et al. in the combination further teach a user experience score indicating a reviewing radiologist satisfaction with a user experience of the training medical finding from the training medical image (see para [0183]; “a situation can occur where the user—upon studying the medical report 180 along with the output of the previously selected one or more evaluation algorithms—is not satisfied with the accuracy of the output of the one or more evaluation algorithms 711-714 or the medical report 180” see also para [0185]; “Sometimes, the user may not be satisfied with the accuracy of this segmentation or annotation. Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation”).
Fong et al. additionally disclose per-algorithm runtime statistics, including accuracy and training/inferred speed, as a feedback score. However, the combination of Fong et al., Schleyen et al., and Kelm et al. as a whole does not specifically disclose feedback score vectors.
In the same field of endeavor, Pacheco et al. teach wherein the feedback scores for the training medical findings from the analysis algorithm comprise feedback score vectors (see Abstract; “ranking and comparing classification algorithms in terms of means and standard deviations”, see also Table 1, which discloses the performance of the classifiers in terms of vectors. Note: each algorithm is characterized by a multi-criteria performance vector, and the algorithms are then ranked using TOPSIS, which reads on the claimed “feedback score vector”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, and the system for examination of a semiconductor specimen of Schleyen et al., in view of the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al., and further in view of the ranking of classification algorithms in terms of mean–standard deviation using A-TOPSIS of Pacheco et al., in order to solve the problem of ranking and comparing classification algorithms (see Abstract).
Regarding claim 9, the rejection of claim 8 is incorporated herein.
Pacheco et al. in the combination further teach wherein the using the algorithm scores to select at least one analysis algorithm comprises: aggregating, by an aggregation function, algorithm score vectors for the analysis algorithms into aggregation scores to rank the analysis algorithms to select the at least one analysis algorithm according to the rank of the analysis algorithms (see Page 100, 4.1 Case Study I; “In addition, we have three aggregation methodologies: the average of the supports (AVG), the majority voting (MV) and the Choquet integral (CHO) [12]. All these classifiers were applied to 12 benchmarks, and their performance for each benchmark is described in Table 1. Our goal is to rank the seven algorithms according to their performance”).
Regarding claim 15, the rejection of claim 13 is incorporated herein.
Fong et al. in the combination further teach wherein a feedback score vector for a training medical finding produced by an analysis algorithm for a training medical image includes an accuracy score indicating an accuracy of the training medical finding from the training medical image (see para [0027]; “The criteria may be, for example, an accuracy of the ML pipeline”), a performance score indicating a performance of the analysis algorithm producing the training medical finding (see para [0027]; “The criteria may be, for example, an accuracy of the ML pipeline, a training cost, a training speed, an inferred speed, and an inferred cost” Note: performance is captured via training speed and inferred speed, and related runtime statistics), and a worth value indicating an extent to which a reviewing radiologist considers the analysis algorithm worth a cost (see para [0032]; “the inferred cost is a prediction of the cost for executing the ML model”), and wherein the orchestration supervisor is trained to produce the feedback score vectors for the inputs from the training sets (see claim 3; “providing inputs of each ML pipeline in the set of ML pipelines into a prediction model to generate the runtime statistics”, see also para [0045]; “The prediction model may be trained using a prediction model training dataset” and para [0066]; “Each of the identified ML pipelines is processed using a prediction model that is trained using a standard dataset.. 
The outputs of the data points in the standard dataset include the following criteria: accuracy, training cost, training speed, inferred cost, and inferred speed”), and wherein algorithm scores for the analysis algorithms from the deployed trained orchestration supervisor comprise algorithm score vectors, wherein an algorithm score vector for an analysis algorithm of the analysis algorithms includes an accuracy score, a performance score, a user experience score and a worth value (see para [0025]; “The prediction model may take as an input a variety of factors of a ML pipeline (e.g., 152, 154) to generate the runtime statistics” see also para [0049]; “In step 206, the runtime statistics and the user preferences associated with the client are input into a pipeline evaluator to obtain an ordering of the ML pipelines”).
Kelm et al. in the combination further teach a user experience score indicating a reviewing radiologist satisfaction with a user experience of the training medical finding from the training medical image (see para [0183]; “a situation can occur where the user—upon studying the medical report 180 along with the output of the previously selected one or more evaluation algorithms—is not satisfied with the accuracy of the output of the one or more evaluation algorithms 711-714 or the medical report 180” see also para [0185]; “Sometimes, the user may not be satisfied with the accuracy of this segmentation or annotation. Then, using the user input 114 providing feedback, the user may refine the segmentation or annotation”).
Pacheco et al. in the combination further teach wherein the feedback scores for the training medical findings from the analysis algorithm comprise feedback score vectors (see Abstract; “ranking and comparing classification algorithms in terms of means and standard deviations”, see also Table 1, which discloses the performance of the classifiers in terms of vectors. Note: each algorithm is characterized by a multi-criteria performance vector, and the algorithms are then ranked using TOPSIS, which reads on the claimed “feedback score vector”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, and the system for examination of a semiconductor specimen of Schleyen et al., in view of the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al., and further in view of the ranking of classification algorithms in terms of mean–standard deviation using A-TOPSIS of Pacheco et al., in order to solve the problem of ranking and comparing classification algorithms (see Abstract).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. and Schleyen et al. in view of Kelm et al. and Pacheco et al. as applied in claims 1, 2 and 8 above, and further in view of Ramanath et al. (US 20200005149 A1).
Regarding claim 10, the rejection of claim 8 is incorporated herein.
The combination of Fong et al., Schleyen et al., Kelm et al. and Pacheco et al. as a whole does not teach wherein the using the algorithm scores to select at least one analysis algorithm comprises: inputting the algorithm score vectors for the analysis algorithms into a learning-to-rank machine learning model to produce a ranking of the analysis algorithms to select the at least one analysis algorithm according to the ranking of the analysis algorithms.
In the same field of endeavor, Ramanath et al. disclose wherein the using the algorithm scores to select at least one analysis algorithm comprises: inputting the algorithm score vectors for the analysis algorithms into a learning-to-rank machine learning model to produce a ranking of the analysis algorithms (see para [0015]; “FIG. 12 is a flowchart illustrating a method of applying learning to rank with deep models for search”, claim 9; “training a ranking model using the training data and a loss function, the ranking model comprising a deep learning model”, see also para [0051]; “The training of the ranking model comprises using a pairwise learning model in applying the loss function”, and para [0040]; “for each one of a plurality of target candidate users; generating, by the computer system, a corresponding score …. using the trained ranking model”) to select the at least one analysis algorithm according to the ranking of the analysis algorithms (see para [0050]; “an indication of at least a portion of the plurality of target candidate users to be displayed on the computing device as search results for the target query based on the generated scores of the plurality of target candidate users” Note: the top-ranked items are selected/reported). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the method of Fong et al. for managing data, which includes obtaining a request for a machine learning (ML) pipeline selection, identifying a set of ML pipelines, and obtaining runtime statistics, and the system for examination of a semiconductor specimen of Schleyen et al., in view of the machine-learning algorithms used for the analysis of medical datasets and generation of a medical report of Kelm et al. and the ranking of classification algorithms in terms of mean–standard deviation using A-TOPSIS of Pacheco et al., and further in view of the techniques for applying learning-to-rank with deep learning models of Ramanath et al., in order to obtain a ranking and select accordingly (see para [0015]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677