DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on September 13, 2023 has been entered.
The amendment of claims 1, 4-7, 10, 15-16, 18, and 19 has been acknowledged.
The cancellation of claims 17 and 20 has been acknowledged.
The introduction of new claims 21-22 has been acknowledged.
Allowable Subject Matter
Claims 6, 8-10, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7, 11-13, 15, 18-19, and 21-22 are rejected under 35 U.S.C. 102(a) as being anticipated by Chen et al. (U.S. Patent Publication No. 2021/0166395-A1, hereinafter “Chen”).
Regarding claim 1, Chen teaches: A computer-implemented method, comprising: obtaining a target image segmentation result according to a target medical image of a target part, wherein the target medical image comprises a medical image in at least one modality; ([0029], "In certain embodiments, the image may be a medical image, that is, an image of a human tissue. The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios…")
obtaining target fusion data according to the target medical image segmentation result and a medical image in a predetermined modality in the target medical image; ([0102], "After the second image segmentation module extracts the feature, the foregoing downsampling process may be further performed, and after all the information is combined, the each pixel of the second sample image is classified to determine the second segmentation result.")
and obtaining a target multi-mutation detection result according to the target fusion data. ([0029], "The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios, for example, human tissue image segmentation scenarios such as liver cancer segmentation, brain cancer and peripheral injury segmentation, lung cancer segmentation, pancreatic cancer segmentation, colorectal cancer segmentation, microvascular invasion of liver segmentation, hippocampus structure segmentation, prostate structure segmentation, left atrium segmentation, pancreas segmentation, liver segmentation, or spleen segmentation, and may alternatively be other human tissue image segmentation scenarios.")
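For illustration only, the following is a minimal, hypothetical Python sketch of the recited sequence of segmentation, fusion, and multi-mutation detection; the model objects, tensor layout, and concatenation-based fusion are assumptions made solely for illustration and are not drawn from Chen or from the claims as filed.

import torch

def detect_multi_mutation(target_image, seg_model, detector, predetermined_idx=0):
    # target_image: (modalities, D, H, W) medical image in at least one modality
    seg_result = seg_model(target_image.unsqueeze(0)).squeeze(0)           # target image segmentation result
    predetermined = target_image[predetermined_idx:predetermined_idx + 1]  # predetermined-modality image
    target_fusion = torch.cat([seg_result, predetermined], dim=0)          # target fusion data
    return detector(target_fusion.unsqueeze(0))                            # target multi-mutation detection result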
Regarding claim 2, Chen teaches: The method according to claim 1, wherein the target medical image comprises a target multi-modal medical image, and the target multi-modal medical image comprises a medical image in a plurality of modalities; ([0070], "The second initial model may further perform modality merging on a multi-modality image, thereby segmenting the merged image. The modality merging module is a module in the second initial model. When the modality number of the second sample image is greater than 1, modality merging may be performed on the second sample image by using the modality merging module.")
and wherein the obtaining target fusion data according to the target medical image segmentation result and a medical image in a predetermined modality in the target medical image comprises:
obtaining first target tumor region feature data according to the target image segmentation result and a medical image in a first predetermined modality in the target multi-modal medical image; ([0033], "An electronic device pre-trains a first initial model based on a plurality of first sample images to obtain a second initial model."; [0070], "The modality merging module is a module in the second initial model. When the modality number of the second sample image is greater than 1, modality merging may be performed on the second sample image by using the modality merging module."; Examiner's Note - The first initial model in the prior art maps to the first predetermined modality, and the second initial model maps to the segmentation result, with the resulting fusion image being produced through this second modality.)
and obtaining the target fusion data according to the first target tumor region feature data and a medical image in a second predetermined modality in the target multi-modal medical image. ([0029], "The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios, for example, human tissue image segmentation scenarios such as liver cancer segmentation, brain cancer and peripheral injury segmentation, lung cancer segmentation, pancreatic cancer segmentation, colorectal cancer segmentation, microvascular invasion of liver segmentation, hippocampus structure segmentation, prostate structure segmentation, left atrium segmentation, pancreas segmentation, liver segmentation, or spleen segmentation, and may alternatively be other human tissue image segmentation scenarios."; [0033], "An electronic device pre-trains a first initial model based on a plurality of first sample images to obtain a second initial model."; [0070], "The modality merging module is a module in the second initial model. When the modality number of the second sample image is greater than 1, modality merging may be performed on the second sample image by using the modality merging module."; Examiner's Note - The segmentation targets are often cancers in the prior art, and the fusion is done on data from a second modality that is designed to segment this target cancer.)
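For illustration only, a hypothetical sketch of the two-step fusion recited in claim 2 (masking the first-modality image with the segmentation result, then combining with the second-modality image); the element-wise masking and stacking scheme, and all names, are assumptions and are not taken from Chen.

import torch

def fuse_multimodal(seg_mask, modality_first, modality_second):
    # seg_mask, modality_first, modality_second: (D, H, W) tensors of equal size
    tumor_features = seg_mask * modality_first                            # first target tumor region feature data
    target_fusion = torch.stack([tumor_features, modality_second], dim=0) # target fusion data
    return target_fusion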
Regarding claim 3, Chen teaches: The method according to claim 1, wherein the target medical image comprises a target mono-modal medical image, and the target mono-modal medical image comprises a medical image in a single modality; ([0029], "In certain embodiments, the image may be a medical image, that is, an image of a human tissue. The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios…")
and wherein the obtaining target fusion data according to the target medical image segmentation result and a medical image in a predetermined modality in the target medical image comprises: obtaining second target tumor region feature data according to the target image segmentation result and the target mono-modal medical image; ([0039], "The electronic device may obtain an image segmentation model through training based on a plurality of second sample images. In certain embodiments, the plurality of second sample images may be stored in the electronic device and can be obtained when image segmentation model training needs to be performed. Each second sample image may further carry a label used for indicating a target segmentation result, where the target segmentation result refers to a correct segmentation result of the second sample image, or an actual segmentation result of the second sample image.")
and determining the second target tumor region feature data as the target fusion data. ([0102], "After the second image segmentation module extracts the feature, the foregoing downsampling process may be further performed, and after all the information is combined, the each pixel of the second sample image is classified to determine the second segmentation result.")
Regarding claim 4, Chen teaches: The method according to claim 1, wherein the obtaining a target multi-mutation detection result according to the target fusion data comprises:
processing the target fusion data based on each of a plurality of first mutation processing strategies, so as to obtain a plurality of target mutation detection results respectively corresponding to the plurality of first mutation processing strategies; ([0121], "After the first image segmentation module and the second image segmentation module are trained, the second initial model may further train a mixed strategy of the two modules based on the two trained modules, that is, for a second sample image, to train to select which one or both of the two modules to better segment the second sample image.")
and obtaining the target multi-mutation detection result according to the plurality of target mutation detection results respectively corresponding to the plurality of first mutation processing strategies. ([0029], "In certain embodiments, the image may be a medical image, that is, an image of a human tissue. The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios, for example, human tissue image segmentation scenarios such as liver cancer segmentation, brain cancer and peripheral injury segmentation, lung cancer segmentation, pancreatic cancer segmentation, colorectal cancer segmentation, microvascular invasion of liver segmentation, hippocampus structure segmentation, prostate structure segmentation, left atrium segmentation, pancreas segmentation, liver segmentation, or spleen segmentation, and may alternatively be other human tissue image segmentation scenarios.")
Regarding claim 5, Chen teaches: The method according to claim 1, wherein the obtaining a target multi-mutation detection result according to the target fusion data comprises: processing the target fusion data based on a first single mutation processing strategy to obtain the target multi-mutation detection result. ([0121], "After the first image segmentation module and the second image segmentation module are trained, the second initial model may further train a mixed strategy of the two modules based on the two trained modules, that is, for a second sample image, to train to select which one or both of the two modules to better segment the second sample image."; Examiner's Note - Even if multiple models are used, the resulting data would still be based in part on the "first" mutation processing strategy.)
Regarding claim 7, Chen teaches: The method according to claim 1, wherein the obtaining a target image segmentation result according to a target medical image of a target part comprises: obtaining target image feature data in at least one scale according to the target medical image of the target part; and obtaining the target image segmentation result according to the target image feature data in at least one scale. ([0073], "As shown in FIG. 5, if a size of an image that is resized and then downsampled by 8 times is less than one pixel, it indicates that a lot of useful information is lost in the downsampling process, and the image needs to be sampled in a multi-scale image cropping manner.")
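For illustration only, a hypothetical sketch of obtaining image feature data at more than one scale by progressive downsampling, in the spirit of the multi-scale sampling discussed in Chen at [0073]; the pooling operator and scale factors are assumptions.

import torch.nn.functional as F

def multi_scale_features(image, factors=(2, 4, 8)):
    # image: (N, C, D, H, W) tensor; returns the original image plus downsampled copies
    features = [image]
    for f in factors:
        features.append(F.avg_pool3d(image, kernel_size=f))  # feature data at scale 1/f
    return features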
Regarding claim 11, Chen teaches: A method of training a deep learning model, comprising: obtaining a sample image segmentation result according to a sample medical image of a sample part, wherein the sample medical image comprises a medical image in at least one modality; ([0029], "In certain embodiments, the image may be a medical image, that is, an image of a human tissue. The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios…")
obtaining sample fusion data according to the sample image segmentation result and a medical image in a predetermined modality in the sample multi-modal medical image; ([0102], "After the second image segmentation module extracts the feature, the foregoing downsampling process may be further performed, and after all the information is combined, the each pixel of the second sample image is classified to determine the second segmentation result.")
obtaining a sample multi-mutation detection result according to the sample fusion data; ([0029], "The image segmentation method provided in the embodiments of the present disclosure is applicable to human tissue image segmentation scenarios, for example, human tissue image segmentation scenarios such as liver cancer segmentation, brain cancer and peripheral injury segmentation, lung cancer segmentation, pancreatic cancer segmentation, colorectal cancer segmentation, microvascular invasion of liver segmentation, hippocampus structure segmentation, prostate structure segmentation, left atrium segmentation, pancreas segmentation, liver segmentation, or spleen segmentation, and may alternatively be other human tissue image segmentation scenarios.")
and training the deep learning model by using the sample image segmentation result, a sample image segmentation label of the sample medical image, the sample multi-mutation detection result, and a sample multi-mutation label of the sample medical image. ([0039], "The electronic device may obtain an image segmentation model through training based on a plurality of second sample images. In certain embodiments, the plurality of second sample images may be stored in the electronic device and can be obtained when image segmentation model training needs to be performed. Each second sample image may further carry a label used for indicating a target segmentation result, where the target segmentation result refers to a correct segmentation result of the second sample image, or an actual segmentation result of the second sample image.")
Regarding claim 12, Chen teaches: The method according to claim 11, wherein the training the deep learning model by using the sample image segmentation result, a sample image segmentation label of the sample medical image, the sample multi-mutation detection result, and a sample multi-mutation label of the sample medical image comprises: obtaining a first output value based on a first loss function according to the sample image segmentation result and the sample image segmentation label of the sample medical image; ([0107], "The second initial model in the electronic device obtains a first segmentation error and a second segmentation error respectively based on the labels of the plurality of second sample images, the first segmentation result, and the second segmentation result."; [0108], "After obtaining the first segmentation result and the second segmentation result, the second initial model may respectively determine whether the first segmentation result and the second segmentation result are accurate based on the label of the second sample image. In certain particular embodiments, whether the segmentation result is accurate may be determined according to the segmentation error.")
obtaining a second output value based on a second loss function according to the sample multi-mutation detection result and the sample multi-mutation label of the sample medical image; ([0111], "In certain embodiments, a process of obtaining the segmentation error of the second segmentation result is implemented by using a second loss function, and a weight of the second loss function is determined based on an online hard example mining (OHEM) algorithm, which can effectively distinguish difficult samples in the second sample image, and reduce the influence of the hard samples on the model parameter, so that adverse effects caused by the imbalance of the sample labels can be dealt with.")
and adjusting a model parameter of the deep learning model according to an output value, wherein the output value is determined according to the first output value and the second output value. ([0113], "The second initial model in the electronic device adjusts the module parameters of the first image segmentation module and the second image segmentation module respectively based on the first segmentation error and the second segmentation error, and stops the adjustment until a first number of iterations is reached, to obtain the first image segmentation module and the second image segmentation module.")
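For illustration only, a hypothetical training-step sketch showing a first output value from a segmentation loss and a second output value from a mutation-detection loss combined into a single output value that is used to adjust the model parameters; the specific loss functions, weighting, and names are assumptions, not Chen's exact formulation.

import torch.nn.functional as F

def training_step(model, optimizer, image, seg_label, mutation_label, w=0.5):
    seg_pred, mutation_pred = model(image)
    first_output = F.cross_entropy(seg_pred, seg_label)                                # first loss function
    second_output = F.binary_cross_entropy_with_logits(mutation_pred, mutation_label)  # second loss function
    output_value = w * first_output + (1.0 - w) * second_output                        # combined output value
    optimizer.zero_grad()
    output_value.backward()   # adjust model parameters according to the output value
    optimizer.step()
    return output_value.item()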
Regarding claim 13, Chen teaches: The method according to claim 12, wherein the obtaining a sample multi-mutation detection result according to the sample fusion data comprises:
processing the sample fusion data based on each of a plurality of first mutation processing strategies, so as to obtain a plurality of sample mutation detection results respectively corresponding to the plurality of first mutation processing strategies; ([0121], "After the first image segmentation module and the second image segmentation module are trained, the second initial model may further train a mixed strategy of the two modules based on the two trained modules, that is, for a second sample image, to train to select which one or both of the two modules to better segment the second sample image.")
and obtaining the sample multi-mutation detection result according to the plurality of sample mutation detection results respectively corresponding to the plurality of first mutation processing strategies. ([0121], "After the first image segmentation module and the second image segmentation module are trained, the second initial model may further train a mixed strategy of the two modules based on the two trained modules, that is, for a second sample image, to train to select which one or both of the two modules to better segment the second sample image.")
Regarding claim 15, Chen teaches: The method according to claim 11 or 12, wherein the obtaining a sample multi-mutation detection result according to the sample fusion data comprises: processing the sample fusion data based on a first single mutation processing strategy to obtain the sample multi-mutation detection result. ([0121], "After the first image segmentation module and the second image segmentation module are trained, the second initial model may further train a mixed strategy of the two modules based on the two trained modules, that is, for a second sample image, to train to select which one or both of the two modules to better segment the second sample image."; Examiner's Note - Even if multiple models are used, the resulting data would still be based in part on the "first" mutation processing strategy.)
Regarding claim 18, Chen teaches: An electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs are configured to, when executed by the one or more processors, cause the one or more processors to implement the method of claim 1. (Fig. 12)
Regarding claim 19, Chen teaches: A computer readable storage medium having executable instructions therein, wherein the instructions are configured to, when executed by a processor, cause the processor to implement the method of claim 1. (Fig. 12)
Regarding claim 21, Chen teaches: An electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs are configured to, when executed by the one or more processors, cause the one or more processors to implement the method of claim 11. (Fig. 12)
Regarding claim 22, Chen teaches: A computer readable storage medium having executable instructions therein, wherein the instructions are configured to, when executed by a processor, cause the processor to implement the method of claim 11. (Fig. 12)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (U.S. Patent Publication No. 2021/0166395-A1, hereinafter “Chen”) in view of Kamen et al. (U.S. Patent Publication No. 2021/0248736-A1, hereinafter “Kamen”).
Regarding claim 14, Chen teaches: The method according to claim 13, wherein the output value is determined according to the first output value, the second output value, and a third output value; ([0127], "The second initial model in the electronic device obtains the first segmentation error, the second segmentation error, and a third segmentation error based on the labels of the second sample images, and the first segmentation result, the second segmentation result, and the fifth segmentation result of the each second sample image.")
Chen does not teach: and wherein the method further comprises: obtaining the third output value based on a third loss function according to a sample mutation detection result corresponding to a predetermined mutation processing strategy and a sample mutation label.
However, Kamen does teach: and wherein the method further comprises: obtaining the third output value based on a third loss function according to a sample mutation detection result corresponding to a predetermined mutation processing strategy and a sample mutation label. (Kamen, [0041], "For both single modality localization loss 212 and classification loss 214, a binary cross entropy loss function is chosen as the objective function. Other loss functions may be employed, such as, e.g., a multi-scale loss function.")
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify feature segmentation error calculation with two forms of loss function (as taught by Chen) to include more than two forms of loss function (as taught by Kamen), because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, feature segmentation error calculation as modified by more than two forms of loss function can yield the predictable result of optimizing the loss functions during training (Kamen, [0041], “Combining the loss functions linearly with a weighting factor results in the overall loss that is optimized during the joint training procedure.”). Thus, a person of ordinary skill would have appreciated including in feature segmentation error calculation with two forms of loss function the ability to include more than two forms of loss function, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
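For illustration only, a hypothetical sketch of linearly combining three loss terms with weighting factors into an overall training loss, consistent with Kamen at [0041]; the particular loss functions, weights, and names are assumptions.

import torch.nn.functional as F

def combined_loss(seg_pred, seg_label, mut_pred, mut_label, strat_pred, strat_label,
                  w1=1.0, w2=1.0, w3=1.0):
    first = F.cross_entropy(seg_pred, seg_label)                          # first loss function
    second = F.binary_cross_entropy_with_logits(mut_pred, mut_label)      # second loss function
    third = F.binary_cross_entropy_with_logits(strat_pred, strat_label)   # third loss function
    return w1 * first + w2 * second + w3 * third                          # overall weighted loss optimized during training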
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jinsu Hwang whose telephone number is (703)756-1370. The examiner can normally be reached Mon 6am-8am, 3pm-9pm EST; Thu 12pm-2pm EST, and Fri 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JINSU HWANG/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667