Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 January 2026 has been entered.
Response to Amendment
The amendment filed 27 January 2026 has been entered.
The amendment of claims 1, 2, 4, 5, 10, 11, 13, 14, 19, and 22 has been acknowledged.
The cancellation of claims 17 and 21 has been acknowledged.
The addition of new claims 23 and 24 has been acknowledged.
Response to Arguments
Applicant’s arguments, see page 10, section “Section 103 Rejections”, filed 27 January 2026 with respect to the rejection of claims 1-3, 5-12, and 14-22, have been fully considered but are not persuasive.
Applicant argues on page 10 of the remarks filed 27 January 2026 that Trautwein et al (U.S. Patent Publication No. 2021/0174503 A1, hereinafter “Trautwein”) fails to disclose the limitations of “wherein the model-accuracy data indicates an accuracy of each of the plurality of machine learning systems” and “outputting the selected machine learning system”. The examiner respectfully disagrees.
Regarding the claim limitation “wherein the model-accuracy data indicates an accuracy of each of the plurality of machine learning systems”, the applicant states that ¶ 0064 of Trautwein discloses that confidence intervals are calculated only for “various outputs of the algorithms” rather than whether each model of the plurality is accurate overall. However, ¶ 0064 discloses that, in addition to the results determined by the algorithms, values for probability, confidence intervals for the various outputs of the algorithms, and a minimum and maximum of a cost function are calculated and stored for each of the image analysis features. ¶ 0068 states “During or after the algorithms have completed the image analysis, a result validator 112 may evaluate the results of the image analyzers 110 in terms of plausibility, confidence, applicability and/or reliability… If a result is not plausible or confident enough, then the result validator may communicate with the algorithm selector 108 to select a new algorithm or adjust certain parameters for the image analyzer 110 in the reference data structures 109. The result validator 112 may then cause the computer system 100 to restart the analysis or perform a parallel analysis beginning again at the algorithm selector 108.” Additionally, Trautwein in ¶ 0091 discloses a step performed by the system in which, based on the selected algorithms, analysis results along with confidence intervals for the algorithms are output to the user with the image data. In view of ¶¶ 0064, 0068, and 0091, it is clear to the examiner that the algorithms/models are evaluated for their accuracy after producing the analysis results, and thus the model-accuracy data is produced.
Regarding the claim limitation “outputting the selected machine learning system”, the examiner notes that Trautwein performs multiple steps of outputting the selected machine learning model or models; however, these steps are not performed in conjunction with the claim limitation of “receiving a selection from the user, the selection corresponding to a selected machine learning system from the plurality of machine learning systems”. As such, the examiner relies on the combination with Gur et al (U.S. Patent Publication No. 2021/0019665 A1) to teach both of these limitations performed in the same context.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-16, 18-20, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Trautwein et al (U.S. Patent Publication No. 2021/0174503 A1, hereinafter “Trautwein”) in view of Gur et al (U.S. Patent Publication No. 2021/0019665 A1, hereinafter “Gur”).
Regarding claim 1, Trautwein teaches a computer-implemented method for processing electronic medical images comprising:
receiving one or more digital medical images of at least one pathology specimen, the pathology specimen being associated with a patient (¶ 0051: The image content for medical images describes the anatomical structures visible in a medical image. For a medical image of a patient's left knee joint, the derived image content metadata may include "left knee joint", "left tibia", "left femur", and/or "left patella". Or for an anteroposterior (AP) image of the lung, the visible image content may be "lung", "thoracal spine", "heart", and/or "aorta".);
receiving one or more search criteria (¶ 0049: The interfaces 102 may transmit one or more image files or other data to a metadata analyzer 104. The computer system 100 or the metadata analyzer 104 may read the metadata of the image data (e.g. DICOM tags) as a preliminary process for the selection and analysis of the image content. This metadata may include, for example, the image modality, image acquisition direction, sequences (for MR modality), slice spacing, but can also contain other data (e.g. features for identifying the patient, image content, acquisition date, etc.).; ¶ 0055: If only certain image data are to be analyzed, one or more filter criteria may be defined. The application of the criteria by the data filter 106 against the available data may take place at this point, so that, for example, only the image data of a certain patient or a group of patients, a certain modality, a certain follow-up period, of certain diseases, for certain treatments or a combination of several of these criteria are subsequently taken into account.);
determining, a plurality of machine learning systems, based on the one or more search criteria, from a marketplace containing a plurality of machine learning systems (¶ 0056: Based on the metadata extracted or aggregated by the metadata analyzer 104, any other data and the filter criteria defined by the data filter 106, the algorithm selector 108 may determine and optimize the image data and algorithms that will used to analyze the respective image content.; ¶ 0058: The selection of the analysis algorithm or algorithms by the algorithm selector 108 may be made solely on the basis of the metadata of the respective image. If, for example, an image with an artificial hip joint prosthesis is recognized, all algorithms suitable for examining hip joint prosthesis (e.g. cup orientation, wear condition, signs of osteolysis or radiolucency) can be automatically selected for use.; ¶ 0060: The algorithms utilized by the image analyzer 110 may be of different types. For example, deep learning or artificial neural network algorithms (e.g. convolutional neural networks (CNNs), recurrent neural networks (RNNs), long-short term memories (LSTMs)) may segment image areas or place landmarks or markers.)
wherein the plurality of machine learning systems includes a first machine learning model and a first set of machine learning models comprised of two or more machine learning systems, wherein the first set of machine learning models are different than the first machine learning model (¶ 0057: For example, a first algorithm may be suitable for segmenting the cervical spine, a second algorithm can be used to segment the lumbar spine, a third to recognize the heads of the femur, a fourth to place landmarks on the sacrum, a fifth to determine the anatomical designation of the segmented vertebrae, a sixth for determining bone density in a given image section, a seventh for classifying degenerative changes, an eighth for determining the extent of carcinogenic tissue, a ninth for counting metastases, etc.);
outputting the plurality of machine learning systems to a user (¶ 0059: The algorithms to be applied can either be explicitly specified by the user or selected based on criteria associated with the analysis task and information related to the capability of each individual algorithm or programmatically based on the derived image metadata.), wherein outputting the plurality of machine learning systems includes:
applying the plurality of machine learning systems to the one or more received digital medical images to determine one or more processed digital medical images (¶ 0060: The algorithms selected or determined for each image by the algorithm selector 108 may then be provided to the computer system 100 and executed to analyze each of the images.), and
displaying the one or more processed digital medical images (¶ 0062: For example, based on "interventional landmarks", an appropriately trained machine learning algorithm can determine the correct position of an implant or the resection edges on an unknown image. If the training data set consists of image data from an individual surgeon or a small group of surgeons (e.g. within one site), the system can also learn personal preferences and automatically apply them to future images.);
determining, by a processor, model-accuracy data of each of the plurality of machine learning systems, wherein the model-accuracy data indicates an accuracy of each of the plurality of machine learning systems (¶ 0064: The image analyzer 110 may be formed of one or more machine learning models that have been trained on similar data (e.g. local hospital data) or general data as a preliminary step that optimizes their capabilities as noted in FIG. 6. In addition to the results determined by the algorithms, values for probability, confidence intervals for the various outputs of the algorithms, and the minimum/maximum of a cost function can also be calculated and stored for each of the image analysis features. Thus, the image analyzer 110 may also produce results and the values used to assess probability or confidence in the outputs related to the anatomical or pathological structure(s) analyzed.; ¶ 0068: If a result is not plausible or confident enough, then the result validator may communicate with the algorithm selector 108 to select a new algorithm or adjust certain parameters for the image analyzer 110 in the reference data structures 109. The result validator 112 may then cause the computer system 100 to restart the analysis or perform a parallel analysis beginning again at the algorithm selector 108.), the model-accuracy data being output to the user (¶ 0091: At 810, the computing system 320 or process 800 analyzes the images based on the selected algorithms or ML models (e.g. CNN, RNN, LTSM) and outputs an analysis result along with a confidence interval. At 812, the computing system 320 or process 800 may process the analyzed images through quality control analysis to validate result based on confidence thresholds or the like. At 814, the computing system 320 or process 800 may combine the image data and analysis results for a report with image overlays illustrating analysis, or other image markup or data display (e.g. deviations from average anatomical sizes).).
Trautwein does not explicitly teach receiving a selection from the user, the selection corresponding to a selected machine learning system from the plurality of machine learning systems; and outputting the selected machine learning system.
However, Gur does teach receiving a selection from the user, the selection corresponding to a selected machine learning system from the plurality of machine learning systems wherein the selected machine learning system is either the first machine learning model or the first set of machine learning models (¶ 0039: An identification of these matching ML algorithms may then be returned to the user via a user interface 150 for selection and execution of the ML algorithms to train a ML model to perform the desired task, which may then be registered and indexed in the trained model repository 140 and trained model index 142.); and
outputting the selected machine learning system (¶ 0039: An identification of these matching ML algorithms may then be returned to the user via a user interface 150 for selection and execution of the ML algorithms to train a ML model to perform the desired task, which may then be registered and indexed in the trained model repository 140 and trained model index 142.).
Trautwein and Gur are considered to be analogous art as both pertain to machine learning repositories. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and storage medium with a program for the automatic analysis of medical image data (as taught by Trautwein) with the system of machine learning model repository management (as taught by Gur). The motivation for this combination of references would be that the system of Gur provides a prediction API for computer logic for classifying new data instances based on a previously trained ML model and allows searching of prior trained ML models based on various attributes (see ¶ 0046).
This motivation for the combination of Trautwein and Gur is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim 2, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches wherein outputting the selected machine learning system further includes:
inserting the one or more digital medical images into the selected machine learning system (¶ 0060: The algorithms selected or determined for each image by the algorithm selector 108 may then be provided to the computer system 100 and executed to analyze each of the images.);
applying the selected machine learning system to the inserted digital medical images to generate processed medical images (¶ 0062: For example, based on "interventional landmarks", an appropriately trained machine learning algorithm can determine the correct position of an implant or the resection edges on an unknown image. If the training data set consists of image data from an individual surgeon or a small group of surgeons (e.g. within one site), the system can also learn personal preferences and automatically apply them to future images.); and
outputting the generated processed medical images (¶ 0071: The data validated by the result validator 112 may be passed to or transmitted to the result generator 114 which may output to a graphical user interface, a printed report, or other aggregation of the image data and results.).
Regarding claim 3, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches wherein the display of the one or more images includes a heat map, level of confidence map, and/or color gradient map of the search criteria (¶ 0052: AI algorithms trained as illustrated in FIG. 7 on identifying human anatomy derive the confidence of pixels or voxels belonging to certain structures or organs. If the confidence is above a defined threshold or the difference between the highest ranked structure and the second highest rank is sufficient, the algorithm is capable to derive a segmentation mask indicating the area within the image for the respectively identified structure. The segmentation mask itself or the respective coordinates of bounding boxes or other means allowing to describe the position of the anatomical structures within the image may be saved as input or metadata for subsequent processing. Thus, a reference between a certain area of a visible structure in a medical image and the name or label of said structure is created. For instance, the segmentation mask may relate to image coordinates defining an area.; Examiner’s note: Under broadest reasonable interpretation, as well as in view of the applicant’s disclosure, there appears to be no significant difference between a heat map, a level of confidence map, or a color gradient map. All three are drawn to a map that would differentiate pixels within an image based on a level of confidence.).
Regarding claim 4, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches wherein the display of the one or more images includes a heat map overlay of the one or more medical images indicating a confidence value (¶ 0052: AI algorithms trained as illustrated in FIG. 7 on identifying human anatomy derive the confidence of pixels or voxels belonging to certain structures or organs. If the confidence is above a defined threshold or the difference between the highest ranked structure and the second highest rank is sufficient, the algorithm is capable to derive a segmentation mask indicating the area within the image for the respectively identified structure. The segmentation mask itself or the respective coordinates of bounding boxes or other means allowing to describe the position of the anatomical structures within the image may be saved as input or metadata for subsequent processing. Thus, a reference between a certain area of a visible structure in a medical image and the name or label of said structure is created. For instance, the segmentation mask may relate to image coordinates defining an area.), the confidence value representing a similarity of results between the plurality of machine learning systems (¶ 0091: At 810, the computing system 320 or process 800 analyzes the images based on the selected algorithms or ML models (e.g. CNN, RNN, LTSM) and outputs an analysis result along with a confidence interval. At 812, the computing system 320 or process 800 may process the analyzed images through quality control analysis to validate result based on confidence thresholds or the like. At 814, the computing system 320 or process 800 may combine the image data and analysis results for a report with image overlays illustrating analysis, or other image markup or data display (e.g. deviations from average anatomical sizes).).
Regarding claim 5, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches further including:
suggesting a particular machine learning system of the plurality of machine learning systems based on the search criteria (¶ 0064: In addition to the results determined by the algorithms, values for probability, confidence intervals for the various outputs of the algorithms, and the minimum/maximum of a cost function can also be calculated and stored for each of the image analysis features. Thus, the image analyzer 110 may also produce results and the values used to assess probability or confidence in the outputs related to the anatomical or pathological structure(s) analyzed.; ¶ 0071: The data validated by the result validator 112 may be passed to or transmitted to the result generator 114 which may output to a graphical user interface, a printed report, or other aggregation of the image data and results.).
Regarding claim 6, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches applying a first machine learning system to the received one or more medical images prior to outputting the machine learning systems to a user (¶ 0060: The algorithms selected or determined for each image by the algorithm selector 108 may then be provided to the computer system 100 and executed to analyze each of the images.).
Regarding claim 7, the Trautwein and Gur combination teaches the method of claim 6.
Additionally, Trautwein teaches wherein the first machine learning system applies an initial filter to the one or more medical images, the initial filter to determine an area of tissues displayed in the one or more medical images (¶ 0057: In one implementation, optimization criteria (e.g. input requirements) for the algorithms may be stored in a database, configuration file, or in program code that define the suitability of the algorithms for the various applications. For example… an eighth for determining the extent of carcinogenic tissue…).
Regarding claim 10, claim 10 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above as well as in accordance with Trautwein’s further teaching on:
At least one memory storing instructions (¶ 0044: The computer system may also include a non-transitory storage medium (e.g. a hard drive or solid state drive) for storing and/or accessing the computer program code and computer program instructions (i.e. software).); and
At least one processor configured to execute the instructions (¶ 0045: The computer program instructions may be executed by one or more processors (e.g. central processing units, graphics processing units) of the computer system. The computer program instructions may form applications or software applications which are executed to perform one or more processes.) to perform operations comprising:
Regarding claim 11, claim 11 has been analyzed with regard to respective claim 2 and is rejected for the same reasons of obviousness as used above.
Regarding claim 12, claim 12 has been analyzed with regard to respective claim 3 and is rejected for the same reasons of obviousness as used above.
Regarding claim 13, claim 13 has been analyzed with regard to respective claim 4 and is rejected for the same reasons of obviousness as used above.
Regarding claim 14, claim 14 has been analyzed with regard to respective claim 5 and is rejected for the same reasons of obviousness as used above.
Regarding claim 15, claim 15 has been analyzed with regard to respective claim 6 and is rejected for the same reasons of obviousness as used above.
Regarding claim 16, claim 16 has been analyzed with regard to respective claim 7 and is rejected for the same reasons of obviousness as used above.
Regarding claim 18, the Trautwein and Gur combination teaches the method of claim 10. Additionally, Trautwein teaches wherein the search criteria is a medical diagnosis (¶ 0007: While algorithms for the analysis of image content have been developed, a system that independently selects suitable image analyses based on image content, metadata or other data related to the patient, diagnosis or (possible) treatment, and then applies one or several algorithms to the images to extract and utilize data is needed.; ¶ 0058: The selection of the analysis algorithm or algorithms by the algorithm selector 108 may be made solely on the basis of the metadata of the respective image. If, for example, an image with an artificial hip joint prosthesis is recognized, all algorithms suitable for examining hip joint prosthesis (e.g. cup orientation, wear condition, signs of osteolysis or radiolucency) can be automatically selected for use.).
Regarding claim 19, claim 19 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above as well as in accordance with Trautwein’s further teaching on:
A non-transitory computer-readable medium storing instructions that, when executed by a processor, perform operations processing electronic digital medical images (¶ 0044: The computer system may also include a non-transitory storage medium (e.g. a hard drive or solid state drive) for storing and/or accessing the computer program code and computer program instructions (i.e. software).), the operations comprising:
Regarding claim 20, claim 20 has been analyzed with regard to respective claim 3 and is rejected for the same reasons of obviousness as used above.
Regarding claim 22, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches further comprising displaying the one or more digital medical images after the machine learning system performed analysis on the digital medical images (¶ 0060: The algorithms selected or determined for each image by the algorithm selector 108 may then be provided to the computer system 100 and executed to analyze each of the images.; ¶ 0064: The image analyzer 110 may be formed of one or more machine learning models that have been trained on similar data (e.g. local hospital data) or general data as a preliminary step that optimizes their capabilities as noted in FIG. 6. In addition to the results determined by the algorithms, values for probability, confidence intervals for the various outputs of the algorithms, and the minimum/maximum of a cost function can also be calculated and stored for each of the image analysis features. Thus, the image analyzer 110 may also produce results and the values used to assess probability or confidence in the outputs related to the anatomical or pathological structure(s) analyzed.).
Regarding claim 23, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches further comprising determining one or more machine learning systems that each highlight a different aspect of the one or more search criteria (¶ 0057: In one implementation, optimization criteria (e.g. input requirements) for the algorithms may be stored in a database, configuration file, or in program code that define the suitability of the algorithms for the various applications. For example, a first algorithm may be suitable for segmenting the cervical spine, a second algorithm can be used to segment the lumbar spine, a third to recognize the heads of the femur, a fourth to place landmarks on the sacrum, a fifth to determine the anatomical designation of the segmented vertebrae, a sixth for determining bone density in a given image section, a seventh for classifying degenerative changes, an eighth for determining the extent of carcinogenic tissue, a ninth for counting metastases, etc.).
Regarding claim 24, the Trautwein and Gur combination teaches the method of claim 1.
Additionally, Trautwein teaches further comprising determining one or more related machine learning systems based on search criteria related to the one or more search criteria (¶ 0056: Based on the metadata extracted or aggregated by the metadata analyzer 104, any other data and the filter criteria defined by the data filter 106, the algorithm selector 108 may determine and optimize the image data and algorithms that will used to analyze the respective image content. (emphasis added); ¶ 0058: The selection of the analysis algorithm or algorithms by the algorithm selector 108 may be made solely on the basis of the metadata of the respective image. If, for example, an image with an artificial hip joint prosthesis is recognized, all algorithms suitable for examining hip joint prosthesis ( e.g. cup orientation, wear condition, signs of osteolysis or radiolucency) can be automatically selected for use.).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES whose telephone number is (703) 756-4573. The examiner can normally be reached Monday - Friday, 8:00-5:00 EST, off every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B. JONES/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667