DETAILED ACTION
This Action is in response to Applicant’s response filed on 07/28/2025. Claims 1-20 remain pending in the present application. This Action is made FINAL.
Response to Amendment
Double Patenting: The amended claims filed on 07/28/2025 overcome the double patenting rejection set forth in the previous Office action.
The Examiner notes that the claim set of the present application (Application No. 18/128,290) filed on 07/28/2025 is distinct from the claim set of Application No. 17/969,876 (U.S. Patent No. 12,444,171) filed on 06/30/2025.
Response to Arguments
With respect to the 35 U.S.C. 101 Rejection: Applicant argues that amended claims 1 and 11 recite patent-eligible subject matter. After reviewing the amendments and arguments filed on 07/28/2025, the Examiner has withdrawn the previous 101 rejection for the following reason: the claims recite steps and features that are “significantly more” than any alleged judicial exception and/or provide an improvement to the technical field.
With respect to the 35 U.S.C. 102(a)(1) Rejection: Applicant’s arguments filed on 07/28/2025 have been fully considered but are moot in view of the new ground(s) of rejection based on Elgar et al. (U.S. 2021/0034920 A1).
Claims Status
Claims 1-5, 8-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Elgar et al. (U.S. 2021/0034920 A1; “Elgar”) in view of Van Den Heuvel et al. (U.S. 2021/0133553 A1; “Van”).
Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Elgar in view of Van, and further in view of Shen et al. (U.S. 2020/0019799 A1; “Shen”).
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/28/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 8-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Elgar et al. (U.S. 2021/0034920 A1; “Elgar”) in view of Van Den Heuvel et al. (U.S. 2021/0133553 A1; “Van”).
Regarding claim 1, Elgar discloses an apparatus (Figs. 1-2; Paragraph 37: “non-limiting system 1000 that facilitates enhancing the efficiency and accuracy of annotating data samples for supervised machine learning algorithms”) comprising:
at least one processor (Paragraph 38: “Such components, when executed by the one or more machines (e.g., processors, computers, computing devices, virtual machines, etc.) can cause the one or more machines to perform the operations”) configured to:
obtain a first sequence of two-dimensional (2D) images; (Figs.2: data sources 102; unannotated data samples 104; Fig.4: annotated data samples 402; 404; 406, Fig. 6: unannotated data samples 104; Paragraphs 47-49: “ the collection component 202 can collect or receive the unannotated data samples 104 from the one or more data sources 102 and store the unannotated data samples 104 in the annotation queue. For instance, in association with application of system 200 to annotate medical images for training a DNN model to diagnose a medical condition based on analysis of medical images, the collection component 202 can collect or receive hundreds to thousands to millions (or more) of unannotated medical images for the particular type of medical condition from various medical institutions”)
obtain a first manual annotation based on a first user input, wherein the first manual annotation is associated with a first image of the first sequence of 2D images and indicates an anatomical structure; (Fig. 2: annotation component 208; Figs. 3-4: annotated data samples 402, 404, 406; and Paragraph 51: “the annotation pipeline module 112 can leverage different types of annotation techniques to facilitate annotating the data samples, wherein the different types of annotation techniques can vary with respect to the amount of time and resources involved. For example, in one implementation, the different types of annotation techniques can include a manual annotation technique, a metadata extraction annotation technique and a semi-supervised machine learning technique. … the annotation management component 204 can generate annotation prioritization information that identifies the annotation technique or techniques selected for each (or in some implementations one or more) of the unannotated data samples. … the annotation management component 204 can further generate and provide an entity with information recommending application of the priority order and/or directly send the unannotated data samples to the annotation component 208 for annotation in accordance with the priority order.”) (Paragraph 28: “the mapping of image features based on the physics of the acquisition to underlying physiology, function and anatomy is the core of the science and art of diagnostic radiology, cardiology and pathology.”)
annotate, automatically, the first sequence of 2D images with respect to the anatomical structure based on the first manual annotation and a first machine-learning (ML) model; (Paragraph 51: “the annotation pipeline module 112 can leverage different types of annotation techniques to facilitate annotating the data samples, wherein the different types of annotation techniques can vary with respect to the amount of time and resources involved. For example, in one implementation, the different types of annotation techniques can include a manual annotation technique, a metadata extraction annotation technique and a semi-supervised machine learning technique. … the annotation management component 204 can generate annotation prioritization information that identifies the annotation technique or techniques selected for each (or in some implementations one or more) of the unannotated data samples. … the annotation management component 204 can further generate and provide an entity with information recommending application of the priority order and/or directly send the unannotated data samples to the annotation component 208 for annotation in accordance with the priority order.”; a person of ordinary skill in the art would understand that “the selected one or more annotation techniques such as manual annotation and metadata extraction annotation in order” is interpreted as “first manual annotation and first machine learning model”; Paragraph 74: “a data sample (e.g., an image) can be annotated more than once using different annotation techniques and/or different annotation processes associated with a same annotation technique. For example, as shown in FIG. 5, the resulting annotated data samples 502 can include several groups (e.g., group 1, group 2, group N) of data samples corresponding to the same input sample yet annotated using different annotation techniques.”)
determine whether the automatically annotated first sequence of 2D images meets a readiness requirement, wherein the determination is made by generating, using a second ML model, (Fig. 6: annotation accuracy evaluation component 604 and machine learning model (M1) 110) a query annotation associated with the anatomical structure based on multiple annotated 2D images from the automatically annotated first sequence of 2D images (Paragraph 81: “the machine learning model M1 can also be applied to data samples annotated using techniques other than the semi-supervised machine learning technique to determine the degree of confidence in the accuracy of the applied annotation. For example, the annotation accuracy evaluation component 604 can apply the machine learning model M1 to a manually annotated data sample and/or a metadata annotated data sample to generate an inference output and a confidence level in the accuracy of the inference output.”) and comparing the query annotation with a ground truth annotation; (Paragraph 83: “the annotation accuracy evaluation component 604 can compare the annotated data samples 210 to the annotated training data samples included in the annotated training data set 106 (e.g., which are expected to be or determined to be accurate) to estimate the degree of confidence in the applied annotations. With these embodiments, the annotation accuracy evaluation component 604 can compare an annotated data sample (e.g., annotated using any of the different annotation techniques) to the annotated training data samples included in the annotated training data set 106 to identify one or more annotated training data samples that correspond to the annotated data sample (e.g., using a feature to feature comparison).”; it shows that “annotated training data set 106” is interpreted as “a ground truth annotation”) and
in response to determining that the automatically annotated first sequence of 2D images meets the readiness requirement, annotate, automatically, a second sequence of 2D images with respect to the anatomical structure based on the second ML model and the automatically annotated first sequence of 2D images. (Paragraph 39: “the model development module 108 can facilitate training and/or optimizing one or more machine learning models (e.g., machine learning model 110, M1) using accurately annotated/labeled training data samples … system 100 can be configured to train and develop a plurality of different machine models respectively tailored to different input data sets”; Paragraph 67: “the initial distribution of unannotated cases to a particular annotation technique by the annotation management component 204 could be random, determined manually, or based on some other criteria determined as a result of an active learning process … as the active learning process progresses over time, the continued distribution of new, unannotated data cases collected in the annotation que 114 can become more automated with (e.g., with no manual intervention. For example, as a result of the active learning processes, if the system 200 (i.e., the priority evaluation component 206) thinks M1 will generate an annotation for the “unannotated” case with a high confidence level, then this case can be ranked with a lower priority and thus sent for annotation using an automated annotation technique (e.g., a semi-supervised annotation technique and/or a metadata extraction technique).”; a person of ordinary skill in the art would understand that “the active learning process progresses over time, the continued distribution of new, unannotated data cases collected in the annotation que 114” is interpreted as the “second sequence of 2D images”.)
However, Elgar does not disclose that the first manual annotation indicates a location of the anatomical structure in the first image.
Van discloses obtain a first sequence of two-dimensional (2D) images; (Figs. 6-7 show two-dimensional images; Paragraph 18: “the portion of data may include a sequence of images separated in time,”; Paragraph 55: “the first model may include a deep neural network … the first model may predict the annotation for the at least one other parameter in the portion of data,”; it shows that the input is two-dimensional images for training the deep neural network.)
obtain a first manual annotation based on a first user input, (Figs. 2-3 ; Paragraph 72: “at block 202 of FIG. 3, the process includes receiving a first user input to annotate a first parameter in a portion of data (as described earlier with respect to block 202 of FIG. 2)”) wherein the first manual annotation is associated with a first image of the first sequence of 2D images (Paragraph 88: “the first model can be used to efficiently annotate a sequence of images, such as a sequence of images separated in time (e.g. a time sequence of images). In such examples, the user may annotate a first parameter that relates to a first image in the sequence of images”) and indicates a location of an anatomical structure in the first image; (Paragraph 50: “The first parameter may include the location of a feature in the portion of data. For example, in embodiments where the portion of data is an image, the first parameter may include the location of a feature in the image. Where the image is a medical image, the feature may include the location of an anatomical structure, an artificial structure and/or an abnormality … (for example, the user may indicate that the image relates to a “heart”).”)
annotate, automatically, the first sequence of 2D images with respect to the location of the anatomical structure based on the first manual annotation and a first machine-learning (ML) model; (Paragraphs 52-55: “Returning back to FIG. 2, at block 204, the method includes using a first model to predict an annotation for at least one other parameter of the portion of data, based on the received first user input for the first parameter … the first model includes any model that uses the annotated first parameter (as derived from the first user input) to predict an annotation for at least one other parameter of the portion of data … the first model may be a machine learning model.”; Paragraph 88: “the user may annotate a first parameter that relates to a first image in the sequence of images and the first model may predict an annotation of the first parameter”)
determine whether the automatically annotated first sequence of 2D images meets a readiness requirement, (Paragraphs 72-74: “the portion of data is annotated (by the user and the first model) for use in training the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold”)
and in response to determining that the automatically annotated first sequence of 2D images meets the readiness requirement, annotate, automatically, a second sequence of 2D images with respect to the anatomical structure based on the second ML model and the automatically annotated first sequence of 2D images. (Paragraphs 66-67: “block 206 of FIG. 2, the method includes using the annotated first parameter (which is received from the user at block 202 of FIG. 2), the predicted annotation for the at least one other parameter (as predicted by the first model at block 204 of FIG. 2) and the portion of data, as training data to train a second model. … the second model may be for annotating the first parameter and/or the at least one other parameter in one or more further (e.g. unseen) portions of data.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Elgar by including the predicted annotation of the at least one other parameter in a medical image, as taught by Van, in order to provide a method and system for training a model; one of ordinary skill in the art would have been motivated to combine the references since doing so would improve the efficiency of the annotation process and reduce processing time. (Van: Paragraph 84)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 2, Elgar, as modified by Van, discloses the claimed invention as set forth above. Van further discloses the at least one processor being configured to automatically annotate the first sequence of images based on the first manual annotation comprises the at least one processor being configured to annotate the first sequence of 2D images progressively based on a pre-defined annotation propagation window size. (Paragraphs 72-73: “the portion of data is annotated (by the user and the first model) for use in training the second model. In some embodiments, at block 308 of FIG. 3, the method includes determining whether sufficient annotated portions of data are available to train the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold”)
Regarding claim 3, Elgar, as modified by Van, discloses the claimed invention as set forth above. Van further discloses the at least one processor being configured to annotate the first sequence of 2D images progressively based on the pre-defined annotation propagation window size comprises the at least one processor being configured to: select a first subset of 2D images from the first sequence of 2D images based on the pre-defined annotation propagation window size; annotate, automatically, the first subset of 2D images (the annotated first parameter(s)) based on the first manual annotation and the first ML model; obtain a second annotation based on the automatically annotated first subset of 2D images; select a second subset of 2D images from the first sequence of 2D images based on the pre-defined annotation propagation window size; and annotate, automatically, the second subset of 2D images (the predicted annotation(s) of the at least one other parameter) based on the second annotation and the first ML model. (Paragraphs 72-73: “the portion of data is annotated (by the user and the first model) for use in training the second model. In some embodiments, at block 308 of FIG. 3, the method includes determining whether sufficient annotated portions of data are available to train the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold. The predetermined threshold may be set based on numerical analysis (e.g. simulations relating to the performance of the second model for different sizes of training data). … If insufficient portions of annotated data are available, then blocks 202 and 204 of FIG. 3 may be repeated on further portions of data until it is determined that enough annotated portions of data are available to train the second model. At block 206 of FIG. 3, the annotated first parameter(s), the predicted annotation(s) of the at least one other parameter and the portion(s) of data are used as training data to train the second model.”)
Regarding claim 4, Elgar, as modified by Van, discloses the claimed invention as set forth above. Van further discloses the at least one processor is configured to obtain the second annotation by selecting a candidate annotation from the automatically annotated first sequence of 2D images and adjusting the selected candidate annotation based on a second user input. (Paragraph 75: “After this training, the performance of the second model is reviewed at block 310 of FIG. 3 to check whether the performance of the second model is sufficient. If the performance of the second model is insufficient (e.g. not accurate enough for the user's purpose), the process moves to block 312 of FIG. 3 whereby the first model is retrained to output further, improved annotation suggestions of further portions of data. Block 312 may include, for example, re-training the first model based on a user input indicating the accuracy of predicted annotations of the first model”)
Regarding claim 5, Elgar, as modified by Van, discloses the claimed invention as set forth above. Elgar further discloses the second ML model is pre-trained for extracting features from the automatically annotated first sequence of 2D images and annotating the second sequence of 2D images based on an average or a maximum of the extracted features. (Figs. 3-6; Paragraph 51: “With the metadata extraction annotation technique, an unannotated data sample can be automatically annotated based on machine analysis of the associated metadata (e.g., the additional, non-image-based clinical information associated with a medical image) that identifies or indicates the classification of the unannotated data sample that the machine learning model (e.g., M1) is configured to infer”; Paragraph 71: “The technique 2 annotation component 304 can be configured to perform a metadata extraction technique to generate a second subset 404 of automatically annotated medical images. … annotation component 304 can access and apply the machine learning model (M1) to the unannotated medical images to generate an inference result that machine learning model has been trained to generate. the technique 3 annotation component 306 can further annotate or label the medical image with the inference result.”; a person of ordinary skill in the art would understand that the inference result from machine learning model 110 (the second ML model) based on the extracted annotation (features) is interpreted as “based on an average or a maximum of the extracted features”.)
Regarding claim 8, Elgar, as modified by Van, discloses the claimed invention as set forth above. Elgar further discloses the ground truth annotation is a user-adjusted or user-confirmed annotation, (Paragraph 43: “the annotated training data set 106 can include an initial set of annotated training data samples that can be used to initiate training and development of the machine learning model M1. For example, the initial annotated training data samples can include manually labeled/annotated data samples that are known to be accurate (e.g., providing ground truth examples)”) and wherein the at least one processor being configured to determine whether the automatically annotated first sequence of 2D images meets the readiness requirement comprises the at least one processor being configured to: calculate a readiness score (estimate the degree of confidence) based on the comparison of the query annotation with the ground truth annotation; and compare the readiness score with a pre-determined threshold value. (Paragraphs 83-85: “the annotation accuracy evaluation component 604 can compare the annotated data samples 210 to the annotated training data samples included in the annotated training data set 106 (e.g., which are expected to be or determined to be accurate) to estimate the degree of confidence in the applied annotations. … the annotation accuracy evaluation component 604 can find annotated training images included in the annotated training data set 106 that match or substantially correspond to (e.g., with respect to a defined threshold of correspondence) a newly annotated medical image annotated using any of the different annotation techniques … the training selection component 608 can identify and/or select the annotated data samples having annotations with estimated confidence levels that exceed a threshold confidence level.”; it shows that “annotated training data set 106” is interpreted as “a ground truth annotation”.)
Regarding claim 9, Elgar, as modified by Van, discloses the claimed invention as set forth above. Elgar further discloses the first sequence of 2D images is associated with a first patient and wherein the second sequence of 2D images is associated with a second patient. (Paragraph 50: “the non-clinical information associated with an unannotated medical image can include attributes regarding the patient from which the medical image was taken (e.g., patient medical history, patient comorbidity, patient demographics such as age, gender, location, height, weight, etc.), … using an active learning process, the annotation pipeline module 112 can learn that the model consistently generates low confidence diagnosis for medical images of a specific patient subgroup (e.g., age group, gender, location, etc.),”; one having ordinary skill in the art would view “the image data of a patient subgroup” as training data from different patients.)
Regarding claim 10, Elgar, as modified by Van, discloses the claimed invention as set forth above. Van further discloses the at least one processor is further configured to provide a graphical user interface for obtaining the first user input. (Fig. 4 and Paragraph 55; Paragraph 77: “a first portion of data is presented to the user for the user to annotate, for example, using a user interface 104 as described earlier.”)
Regarding claim 11, Elgar discloses a method of automatic image annotation, (Paragraph 2: “systems, computer-implemented methods, apparatus and/or computer program products are described that provide an annotation pipeline for machine learning algorithm training and optimization.”) the method comprising:
obtaining a first sequence of two-dimensional (2D) images; (Figs.2: data sources 102; unannotated data samples 104; Fig.4: annotated data samples 402; 404; 406, Fig. 6: unannotated data samples 104; Paragraphs 47-49: “ the collection component 202 can collect or receive the unannotated data samples 104 from the one or more data sources 102 and store the unannotated data samples 104 in the annotation queue. For instance, in association with application of system 200 to annotate medical images for training a DNN model to diagnose a medical condition based on analysis of medical images, the collection component 202 can collect or receive hundreds to thousands to millions (or more) of unannotated medical images for the particular type of medical condition from various medical institutions”)
obtaining a first manual annotation based on a first user input, wherein the first manual annotation is associated with a first image of the first sequence of 2D images and indicates an anatomical structure; (Fig. 2: annotation component 208; Figs. 3-4: annotated data samples 402, 404, 406; and Paragraph 51: “the annotation pipeline module 112 can leverage different types of annotation techniques to facilitate annotating the data samples, wherein the different types of annotation techniques can vary with respect to the amount of time and resources involved. For example, in one implementation, the different types of annotation techniques can include a manual annotation technique, a metadata extraction annotation technique and a semi-supervised machine learning technique. … the annotation management component 204 can generate annotation prioritization information that identifies the annotation technique or techniques selected for each (or in some implementations one or more) of the unannotated data samples. … the annotation management component 204 can further generate and provide an entity with information recommending application of the priority order and/or directly send the unannotated data samples to the annotation component 208 for annotation in accordance with the priority order.”) (Paragraph 28: “the mapping of image features based on the physics of the acquisition to underlying physiology, function and anatomy is the core of the science and art of diagnostic radiology, cardiology and pathology.”)
annotating, automatically, the first sequence of 2D images with respect to the anatomical structure based on the first manual annotation and a first machine-learning (ML) model; (Paragraph 51: “the annotation pipeline module 112 can leverage different types of annotation techniques to facilitate annotating the data samples, wherein the different types of annotation techniques can vary with respect to the amount of time and resources involved. For example, in one implementation, the different types of annotation techniques can include a manual annotation technique, a metadata extraction annotation technique and a semi-supervised machine learning technique. … the annotation management component 204 can generate annotation prioritization information that identifies the annotation technique or techniques selected for each (or in some implementations one or more) of the unannotated data samples. … the annotation management component 204 can further generate and provide an entity with information recommending application of the priority order and/or directly send the unannotated data samples to the annotation component 208 for annotation in accordance with the priority order.”; a person of ordinary skill in the art would understand that “the selected one or more annotation techniques such as manual annotation and metadata extraction annotation in order” is interpreted as “first manual annotation and first machine learning model”; Paragraph 74: “a data sample (e.g., an image) can be annotated more than once using different annotation techniques and/or different annotation processes associated with a same annotation technique. For example, as shown in FIG. 5, the resulting annotated data samples 502 can include several groups (e.g., group 1, group 2, group N) of data samples corresponding to the same input sample yet annotated using different annotation techniques.”)
determining whether the automatically annotated first sequence of 2D images meets a readiness requirement, wherein the determination is made by generating, using a second ML model, (Fig. 6: annotation accuracy evaluation component 604 and machine learning model (M1) 110) a query annotation associated with the anatomical structure based on multiple annotated 2D images from the automatically annotated first sequence of 2D images (Paragraph 81: “the machine learning model M1 can also be applied to data samples annotated using techniques other than the semi-supervised machine learning technique to determine the degree of confidence in the accuracy of the applied annotation. For example, the annotation accuracy evaluation component 604 can apply the machine learning model M1 to a manually annotated data sample and/or a metadata annotated data sample to generate an inference output and a confidence level in the accuracy of the inference output.”) and comparing the query annotation with a ground truth annotation; (Paragraph 83: “the annotation accuracy evaluation component 604 can compare the annotated data samples 210 to the annotated training data samples included in the annotated training data set 106 (e.g., which are expected to be or determined to be accurate) to estimate the degree of confidence in the applied annotations. With these embodiments, the annotation accuracy evaluation component 604 can compare an annotated data sample (e.g., annotated using any of the different annotation techniques) to the annotated training data samples included in the annotated training data set 106 to identify one or more annotated training data samples that correspond to the annotated data sample (e.g., using a feature to feature comparison).”; it shows that “annotated training data set 106” is interpreted as “a ground truth annotation”) and
in response to determining that the automatically annotated first sequence of 2D images meets the readiness requirement, annotating, automatically, a second sequence of 2D images with respect to the anatomical structure based on the second ML model and the automatically annotated first sequence of 2D images. (Paragraph 39: “the model development module 108 can facilitate training and/or optimizing one or more machine learning models (e.g., machine learning model 110, M1) using accurately annotated/labeled training data samples … system 100 can be configured to train and develop a plurality of different machine models respectively tailored to different input data sets”; Paragraph 67: “the initial distribution of unannotated cases to a particular annotation technique by the annotation management component 204 could be random, determined manually, or based on some other criteria determined as a result of an active learning process … as the active learning process progresses over time, the continued distribution of new, unannotated data cases collected in the annotation que 114 can become more automated with (e.g., with no manual intervention. For example, as a result of the active learning processes, if the system 200 (i.e., the priority evaluation component 206) thinks M1 will generate an annotation for the “unannotated” case with a high confidence level, then this case can be ranked with a lower priority and thus sent for annotation using an automated annotation technique (e.g., a semi-supervised annotation technique and/or a metadata extraction technique).”; a person of ordinary skill in the art would understand that “the active learning process progresses over time, the continued distribution of new, unannotated data cases collected in the annotation que 114” is interpreted as the “second sequence of 2D images”.)
However, Elgar does not disclose that the first manual annotation indicates a location of an anatomical structure in the first image.
Van discloses obtaining a first sequence of two-dimensional (2D) images; (Figs. 6-7 show two-dimensional images; Paragraph 18: “the portion of data may include a sequence of images separated in time,”; Paragraph 55: “the first model may include a deep neural network … the first model may predict the annotation for the at least one other parameter in the portion of data,”, it shows that the inputs are two-dimensional images for training the deep neural network)
obtaining a first manual annotation based on a first user input, (Figs. 2-3 ; Paragraph 72: “at block 202 of FIG. 3, the process includes receiving a first user input to annotate a first parameter in a portion of data (as described earlier with respect to block 202 of FIG. 2)”) wherein the first manual annotation is associated with a first image of the first sequence of 2D images (Paragraph 88: “the first model can be used to efficiently annotate a sequence of images, such as a sequence of images separated in time (e.g. a time sequence of images). In such examples, the user may annotate a first parameter that relates to a first image in the sequence of images”) and indicates a location of an anatomical structure in the first image; (Paragraph 50: “The first parameter may include the location of a feature in the portion of data. For example, in embodiments where the portion of data is an image, the first parameter may include the location of a feature in the image. Where the image is a medical image, the feature may include the location of an anatomical structure, an artificial structure and/or an abnormality … (for example, the user may indicate that the image relates to a “heart”).”)
annotating, automatically, the first sequence of 2D images with respect to the location of the anatomical structure based on the first manual annotation and a first machine-learning (ML) model; (Paragraphs 52-55: “Returning back to FIG. 2, at block 204, the method includes using a first model to predict an annotation for at least one other parameter of the portion of data, based on the received first user input for the first parameter … the first model includes any model that uses the annotated first parameter (as derived from the first user input) to predict an annotation for at least one other parameter of the portion of data … the first model may be a machine learning model.”; Paragraph 88: “the user may annotate a first parameter that relates to a first image in the sequence of images and the first model may predict an annotation of the first parameter”)
determining whether the automatically annotated first sequence of 2D images meets a readiness requirement, (Paragraphs 72-74: “the portion of data is annotated (by the user and the first model) for use in training the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold”)
and in response to determining that the automatically annotated first sequence of 2D images meets the readiness requirement, annotating, automatically, a second sequence of 2D images with respect to the anatomical structure based on the second ML model and the automatically annotated first sequence of 2D images. (Paragraphs 66-67: “block 206 of FIG. 2, the method includes using the annotated first parameter (which is received from the user at block 202 of FIG. 2), the predicted annotation for the at least one other parameter (as predicted by the first model at block 204 of FIG. 2) and the portion of data, as training data to train a second model. … the second model may be for annotating the first parameter and/or the at least one other parameter in one or more further (e.g. unseen) portions of data.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Elgar by including the predicted annotation of the at least one other parameter in medical images, as taught by Van, to arrive at a method and system for training a model; thus, one of ordinary skill in the art would have been motivated to combine the references since doing so would improve the efficiency of the annotation process as well as reduce processing time. (Van: Paragraph 84)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 12, Elgar, as modified by Van, discloses the claimed invention. Van further discloses annotating, automatically, the first sequence of images based on the first manual annotation comprises annotating the first sequence of 2D images progressively based on a pre-defined annotation propagation window size. (Paragraphs 72-73: “the portion of data is annotated (by the user and the first model) for use in training the second model. In some embodiments, at block 308 of FIG. 3, the method includes determining whether sufficient annotated portions of data are available to train the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold”)
Regarding claim 13, Elgar, as modified by Van, discloses the claimed invention. Van further discloses annotating the first sequence of 2D images progressively based on the pre-defined annotation propagation window size comprises: selecting a first subset of 2D images from the first sequence of 2D images based on the pre-defined annotation propagation window size; annotating, automatically, the first subset of 2D images (the annotated first parameter(s)) based on the first manual annotation and the first ML model; obtaining a second annotation based on the automatically annotated first subset of 2D images; selecting a second subset of 2D images from the first sequence of 2D images based on the pre-defined annotation propagation window size; and annotating, automatically, the second subset of 2D images (the predicted annotation(s) of the at least one other parameter) based on the second annotation and the first ML model. (Paragraphs 72-73: “the portion of data is annotated (by the user and the first model) for use in training the second model. In some embodiments, at block 308 of FIG. 3, the method includes determining whether sufficient annotated portions of data are available to train the second model … the step of determining whether sufficient annotated portions of data are available may include comparing the number of annotated portions of data to a predetermined threshold. the predetermined threshold may be set based on numerical analysis (e.g. simulations relating to the performance of the second model for different sizes of training data). … If insufficient portions of annotated data are available, then blocks 202 and 204 of FIG. 3 may be repeated on further portions of data until it is determined that enough annotated portions of data are available to train the second model. At block 206 of FIG. 3, the annotated first parameter(s), the predicted annotation(s) of the at least one other parameter and the portion(s) of data are used as training data to train the second model.”)
Regarding claim 14, Elgar, as modified by Van, discloses the claimed invention. Van further discloses the second annotation is obtained by selecting a candidate annotation from the automatically annotated first sequence of 2D images and adjusting the selected candidate annotation based on a second user input. (Paragraph 75: “After this training, the performance of the second model is reviewed at block 310 of FIG. 3 to check whether the performance of the second model is sufficient. If the performance of the second model is insufficient (e.g. not accurate enough for the user's purpose), the process moves to block 312 of FIG. 3 whereby the first model is retrained to output further, improved annotation suggestions of further portions of data. Block 312 may include, for example, re-training the first model based on a user input indicating the accuracy of predicted annotations of the first model”)
Regarding claim 15, Elgar, as modified by Van, discloses the claimed invention. Elgar further discloses the second ML model is pre-trained for extracting features from the automatically annotated first sequence of 2D images and annotating the second sequence of 2D images based on an average or a maximum of the extracted features. (Figs. 3-6; Paragraph 51: “With the metadata extraction annotation technique, an unannotated data sample can be automatically annotated based on machine analysis of the associated metadata (e.g., the additional, non-image-based clinical information associated with a medical image) that identifies or indicates the classification of the unannotated data sample that the machine learning model (e.g., M1) is configured to infer”; Paragraph 71: “The technique 2 annotation component 304 can be configured to perform a metadata extraction technique to generate a second subset 404 of automatically annotated medical images. … annotation component 304 can access and apply the machine learning model (M1) to the unannotated medical images to generate an inference result that machine learning model has been trained to generate. the technique 3 annotation component 306 can further annotate or label the medical image with the inference result.”, a person of ordinary skill in the art would understand that the inference result from machine learning model 110 (the second ML model), based on the extracted annotations (features), is interpreted as “based on an average or a maximum of the extracted features”)
Regarding claim 16 (Original) The method of claim 11, wherein the first ML model is trained using a plurality of sequentially ordered training images and wherein, during the training of the first ML model: the first ML model is used to annotate, automatically, the plurality of sequentially ordered training images in a first order and based on a first training annotation; the first ML model is further used to annotate, automatically, the plurality of sequentially ordered training images in a second order and based on a second training annotation; and parameters of the first ML model are adjusted to reduce a difference between annotations obtained in the first order and corresponding annotations obtained in the second order.
Regarding claim 17 (Original) The method of claim 16, wherein the first order is based on an ascending order of image indices associated with the plurality of sequentially ordered training images and the second order is based on a descending order of the image indices associated with the plurality of sequentially ordered training images.
Regarding claim 18, Elgar, as modified by Van, discloses the claimed invention. Elgar further discloses the ground truth annotation is a user-adjusted or user-confirmed annotation, (Paragraph 43: “the annotated training data set 106 can include an initial set of annotated training data samples that can be used to initiate training and development of the machine learning model M1. For example, the initial annotated training data samples can include manually labeled/annotated data samples that are known to be accurate (e.g., providing ground truth examples)”) and wherein determining whether the automatically annotated first sequence of 2D images meets the readiness requirement comprises: calculating a readiness score (estimate the degree of confidence) based on the comparison of the query annotation with the ground truth annotation; and comparing the readiness score with a pre-determined threshold value. (Paragraphs 83-85: “the annotation accuracy evaluation component 604 can compare the annotated data samples 210 to the annotated training data samples included in the annotated training data set 106 (e.g., which are expected to be or determined to be accurate) to estimate the degree of confidence in the applied annotations. … the annotation accuracy evaluation component 604 can find annotated training images included in the annotated training data set 106 that match or substantially correspond to (e.g., with respect to a defined threshold of correspondence) a newly annotated medical image annotated using any of the different annotation techniques … the training selection component 608 can identify and/or select the annotated data samples having annotations with estimated confidence levels that exceed a threshold confidence level.”, it shows that “annotated training data set 106” is interpreted as “a ground truth annotation”).
Regarding claim 19, Elgar, as modified by Van, discloses the claimed invention. Elgar further discloses the first sequence of 2D images is one of one or more annotated image sequences associated with a first patient and wherein the second sequence of 2D images is associated with a second patient. (Paragraph 50: “the non-clinical information associated with an unannotated medical image can include attributes regarding the patient from which the medical image was taken (e.g., patient medical history, patient comorbidity, patient demographics such as age, gender, location, height, weight, etc.), … using an active learning process, the annotation pipeline module 112 can learn that the model consistently generates low confidence diagnosis for medical images of a specific patient subgroup (e.g., age group, gender, location, etc.),” one having ordinary skill in the art would see “the image data of a patient subgroup” as different training data from different patients.)
Regarding claim 20, Elgar discloses a non-transitory computer-readable medium comprising instructions that, when executed by a processor included in a computing device, (Paragraph 107: “The computer program product can include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.”) cause the processor to:
obtain a first sequence of two-dimensional (2D) images; (Fig. 2: data sources 102; unannotated data samples 104; Fig. 4: annotated data samples 402, 404, 406; Fig. 6: unannotated data samples 104; Paragraphs 47-49: “the collection component 202 can collect or receive the unannotated data samples 104 from the one or more data sources 102 and store the unannotated data samples 104 in the annotation queue. For instance, in association with application of system 200 to annotate medical images for training a DNN model to diagnose a medical condition based on analysis of medical images, the collection component 202 can collect or receive hundreds to thousands to millions (or more) of unannotated medical images for the particular type of medical condition from various medical institutions”)
obtain a first manual annotation based on a first user input, wherein the first manual annotation is associated with a first image of the first sequence of 2D images and indicates an anatomical structure; (Fig. 2: annotation component 208; Figs. 3-4: annotated data samples 402, 404, 406 and Paragraph 51: “the annotation pipeline module 112 can leverage different types of annotation techniques to facilitate annotating the data samples, wherein the different types of annotation techniques can vary with respect to the amount of time and resources involved. For example, in one implementation, the different types of annotation techniques can include a manual annotation technique, a metadata extraction annotation technique and a semi-supervised machine learning technique. … the annotation management component 204 can generate annotation prioritization information that identifies the annotation technique or techniques selected for each (or in some implementations one or more) of the unannotated data samples. … the annotation management component 204 can further generate and provide an entity with information recommending application of the priority order and/or directly send the unannotated data samples to the annotation component 208 for annotation in accordance with the priority order.”) (Paragraph 28: “the mapping of image features based on the physics of the acquisition to underlying physiology, function and anatomy is the core of the science and art of diagnostic radiology, ca