DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/24/2025, 04/30/2025, and 05/15/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. ("Source-free domain adaptive fundus image segmentation with denoised pseudo-labeling." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2021, as provided) in view of Liu et al. (CN 112419326 A, as provided).
Regarding Claim 1,
Chen discloses A medical image segmentation method, the method comprising: performing image segmentation on a sample medical image through a source domain segmentation model, to obtain a first segmentation result, the source domain segmentation model being obtained through training based on medical image data in a source domain, the sample medical image being an unannotated medical image in a target domain, data distributions of medical images in the target domain and the source domain being different; (Chen, pg. 3, discloses
[reproduction of Chen, pg. 3 excerpt (media_image1.png) omitted]; an image segmentation model is disclosed in which the image is segmented and labeled, and the model is further trained on the training model using class labels)
performing image segmentation on the sample medical image through a target domain segmentation model, to obtain a second segmentation result; (Chen, pg. 4, Fig. 1, discloses [reproductions of Chen excerpts media_image2.png and media_image3.png omitted])
correcting the first segmentation result based on the second segmentation result and a segmentation confidence level of the target domain segmentation model, to obtain a corrected segmentation result; (Chen,
[reproductions of Chen excerpts media_image4.png and media_image5.png omitted]; incorrect labels of segmented image pixels are relabeled with improved accuracy) and
updating training on the target domain segmentation model based on the second segmentation result and the corrected segmentation result. (Chen,
[reproductions of Chen excerpts media_image2.png and media_image3.png omitted]; the target model is initialized with target data for segmentation)
Chen does not explicitly disclose the method being performed by a computer device.
Liu discloses performed by a computer device (Liu, Description, discloses there is provided a computer device, the computer device comprises a processor and a memory, the memory is stored with at least one computer program; the at least one computer program is loaded and executed by the processor to implement operations performed in an image segmentation data processing method as described above; there is provided a computer-readable storage medium, the computer-readable storage medium is stored with at least one computer program, the at least one computer program is loaded and executed by the processor to realize the operation performed in the image segmentation data processing method according to the above aspects; there is provided a computer program product or a computer program, the computer program product or computer program comprises a computer program code, the computer program code stored in a computer-readable storage medium; the processor of the computer device reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer device realizes the operation performed in the image segmentation data processing method according to the above aspects; a computer device is disclosed)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chen, having a method of correcting incorrectly labeled pixels in a segmentation model and training the target model with accurate pixel segmentation using the updated pixel segmentation model, with the teachings of Liu, by performing the segmentation with use of a computer device to improve the efficiency of processing the image.
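For illustration only and not as part of the record, the correction step recited in claim 1 (correcting the first segmentation result based on the second segmentation result and a segmentation confidence level) may be sketched as follows; the function name, the confidence threshold value, and the NumPy formulation are assumptions for the sketch, not drawn from Chen or Liu:

```python
import numpy as np

def correct_segmentation(source_probs, target_probs, conf_threshold=0.75):
    """Correct the source model's result using the target model's confidence.

    source_probs, target_probs: (num_pixels, num_classes) per-pixel class
    probabilities from the source domain and target domain models.
    """
    first = source_probs.argmax(axis=-1)     # first segmentation result
    second = target_probs.argmax(axis=-1)    # second segmentation result
    target_conf = target_probs.max(axis=-1)  # segmentation confidence level
    # Where the two results disagree and the target model is confident,
    # replace the source-model label with the target-model label.
    disagree = (first != second) & (target_conf > conf_threshold)
    return np.where(disagree, second, first)
```

In this hypothetical formulation, pixels where both models agree, or where the target model is not confident, keep the source-model label.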
Regarding Claim 2,
The combination of Chen and Liu further discloses wherein the correcting the first segmentation result based on the second segmentation result and the segmentation confidence level of the target domain segmentation model, to obtain the corrected segmentation result comprises: determining a label error indication map corresponding to the first segmentation result based on the second segmentation result, the label error indication map being configured for indicating correct pixel categories that are correctly labeled and incorrect pixel categories that are incorrectly labeled in the first segmentation result, pixel categories of pixels in the sample medical image being indicated in the first segmentation result; and correcting, based on the segmentation confidence level, the incorrect pixel categories indicated in the label error indication map, to obtain the corrected segmentation result; (Chen, discloses
[reproductions of Chen excerpts media_image1.png, media_image4.png, and media_image5.png omitted]; incorrectly labeled segmented pixels are relabeled correctly, and pixels are adjusted based on the confidence, probability, and threshold values obtained by updating the target training model). Additionally, the rationale and motivation to combine the Chen and Liu references as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 3,
The combination of Chen and Liu further discloses wherein the determining the label error indication map corresponding to the first segmentation result based on the second segmentation result comprises: determining, based on the first segmentation result, first pixels belonging to a first category; determining, based on the second segmentation result, potential probabilities that the first pixels belong to a second category, the second category being a pixel category different from the first category; and determining, based on the potential probabilities, the label error indication map corresponding to the first category in the first segmentation result.
(Chen, discloses
[reproductions of Chen excerpts media_image1.png, media_image4.png, and media_image5.png omitted]; incorrectly labeled segmented pixels are relabeled correctly, and pixels are adjusted based on the confidence, probability, and threshold values obtained by updating the target training model). Additionally, the rationale and motivation to combine the Chen and Liu references as applied in the rejection of claim 1 apply to this claim.
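As a sketch only (the threshold value, array layout, and function name are assumptions, not taken from the record), the label error indication map recited in claim 3 might be computed as follows, flagging first-category pixels that the target model assigns elsewhere with high potential probability:

```python
import numpy as np

def label_error_map(first_labels, target_probs, first_category, threshold=0.5):
    """Indicate which pixels labeled `first_category` in the first
    segmentation result are likely incorrectly labeled, judged by the
    second (target model) segmentation result."""
    # Pixels the first segmentation result assigned to the first category.
    first_pixels = (first_labels == first_category)
    # Potential probability (under the target model) that such a pixel
    # belongs to a second, different category.
    other_prob = 1.0 - target_probs[..., first_category]
    # 1 = likely incorrectly labeled, 0 = likely correctly labeled.
    return (first_pixels & (other_prob > threshold)).astype(np.uint8)
```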
Regarding Claim 4,
The combination of Chen and Liu further discloses wherein the determining, based on the first segmentation result, the first pixels belonging to the first category comprises: determining, based on the first segmentation result, probabilities that the pixels in the sample medical image belong to the first category; determining a first probability threshold corresponding to the first category based on the probabilities that the pixels belong to the first category; and determining, in response to that a probability that a pixel belongs to the first category is greater than the first probability threshold, the pixel as a first pixel.
(Chen, discloses [reproductions of Chen excerpts media_image1.png, media_image4.png, and media_image5.png omitted]; incorrectly labeled segmented pixels are relabeled correctly, and pixels are adjusted based on the confidence, probability, and threshold values obtained by updating the target training model). Additionally, the rationale and motivation to combine the Chen and Liu references as applied in the rejection of claim 1 apply to this claim.
Regarding Claim 5,
The combination of Chen and Liu further discloses wherein the determining the first probability threshold corresponding to the first category based on the probabilities that the pixels belong to the first category comprises: determining a maximum probability value in the probabilities that the pixels belong to the first category; and determining the first probability threshold based on the maximum probability value. (Chen, discloses
[reproductions of Chen excerpts media_image1.png, media_image4.png, and media_image5.png omitted]; incorrectly labeled segmented pixels are relabeled correctly, and pixels are adjusted based on the confidence, probability, and threshold values obtained by updating the target training model). Additionally, the rationale and motivation to combine the Chen and Liu references as applied in the rejection of claim 1 apply to this claim.
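Claims 4 and 5 together describe selecting first pixels by a category-specific threshold derived from the maximum probability value for that category. A minimal sketch, offered only for illustration (the scaling factor `ratio` is a hypothetical parameter not found in the cited references), could be:

```python
import numpy as np

def first_pixels_for_category(probs, category, ratio=0.8):
    """Select pixels whose probability of `category` exceeds a first
    probability threshold derived from the maximum probability value
    for that category (claims 4-5)."""
    class_probs = probs[..., category]
    # Hypothetical choice: threshold as a fixed fraction of the maximum.
    threshold = ratio * class_probs.max()
    return class_probs > threshold
```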
Regarding Claim 11,
The combination of Chen and Liu further discloses wherein the updating training on the target domain segmentation model based on the second segmentation result and the corrected segmentation result comprises: determining pixel category probabilities of the pixels based on the second segmentation result; determining cross-entropy losses based on the pixel category probabilities and corrected category probabilities of the pixels indicated in the corrected segmentation result; and updating the training on the target domain segmentation model based on the cross-entropy losses. (Liu, Description, discloses wherein i represents any pixel point; p represents the first label in the first label image; y represents the source domain label in the source domain image; Lreweight(p, y) represents the third source domain sub-loss value corresponding to any pixel point; λ1 and λ2 represent the weight coefficients; W(yi) represents the weight corresponding to any pixel point; pi represents the first label corresponding to any pixel point; and yi represents the source domain label corresponding to any pixel point. Liu further defines a first difference value, where h represents the length of the label image, w represents the width of the label image, and c represents the category of the label image, as well as a loss value determined based on cross-entropy loss and a loss value determined by another loss value determining mode.
The process of determining the fourth source domain loss value and the process of determining the third source domain loss value are the same, which will not be repeated here. The computer device uses a multiple-iteration training mode to train the first image segmentation model; as the number of training iterations increases, the first image segmentation model becomes more and more stable, so the computer device adjusts the weight coefficients of the third source domain loss value and the fourth source domain loss value according to the number of training iterations. As the iterations increase, the weight coefficient of the fourth source domain loss value becomes larger, so as to avoid the influence of false label images generated by the unstable image segmentation model during training, while at the same time using the effective information in the noisy label images to enhance the robustness of the image segmentation model. Then, determining the first source domain loss value based on the third and fourth source domain loss values comprises: the computer device obtains the number of training iterations corresponding to the training and, in response to the number of iterations being not less than the first threshold value and not greater than the second threshold value, determines the first source domain loss value based on the third source domain loss value, the fourth source domain loss value, the number of iterations, the first threshold value, and the second threshold value; or, in response to the number of iterations being greater than the second threshold value, determines the first source domain loss value based on the third source domain loss value and the fourth source domain loss value; cross-entropy values are determined for pixels segmented into different categories). Additionally, the rationale and motivation to combine the Chen and Liu references as applied in the rejection of claim 1 apply to this claim.
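The cross-entropy update recited in claim 11 can be illustrated as follows; this NumPy formulation, the hard corrected labels, and the epsilon guard are assumptions made for the sketch, not taken from the cited references:

```python
import numpy as np

def correction_cross_entropy(target_probs, corrected_labels, eps=1e-8):
    """Cross-entropy between the target model's pixel category probabilities
    (second segmentation result) and the corrected segmentation labels.

    target_probs: (num_pixels, num_classes) probabilities.
    corrected_labels: (num_pixels,) integer corrected categories.
    """
    n = corrected_labels.shape[0]
    # Probability the target model assigns to each pixel's corrected category.
    p = target_probs[np.arange(n), corrected_labels]
    # eps guards against log(0); the mean aggregates per-pixel losses.
    return float(-np.mean(np.log(p + eps)))
```

Minimizing this loss over the target domain segmentation model's parameters would, under these assumptions, pull the second segmentation result toward the corrected segmentation result.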
Claims 12-16 and 17-20 recite apparatus and computer readable storage medium with elements and instructions corresponding to the method steps recited in Claims 1-5 and 1-4 respectively. Therefore, the recited elements and instructions of the apparatus and computer readable storage medium Claims 12-16 and 17-20 are mapped to the proposed combination in the same manner as the corresponding elements of Claims 1-5 and 1-4 respectively. Additionally, the rationale and motivation to combine the Chen and Liu references presented in rejection of Claim 1, apply to these claims.
Furthermore, the combination of Chen and Liu further discloses An apparatus for medical image segmentation, the apparatus comprising: a memory storing instructions; and a processor in communication with the memory, wherein, when the processor executes the instructions, the processor is configured to cause the apparatus (Liu, Description, as quoted in the rejection of claim 1 above; a computer device comprising a processor and a memory storing at least one computer program that is loaded and executed by the processor is disclosed).
Furthermore, the combination of Chen and Liu further discloses The non-transitory computer-readable storage medium, wherein the computer-readable instructions are configured to cause the processor to perform (Liu, Description, as quoted in the rejection of claim 1 above; a computer-readable storage medium storing at least one computer program that is loaded and executed by the processor is disclosed).
Allowable Subject Matter
Claims 6-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
CN-110795938-B (Abstract a text sequence word segmentation method, belonging to the technical field of natural language processing. The method comprises: obtaining n word-dividing sub-results of the text sequence, wherein the n word-dividing sub-results are obtained by respectively performing word-dividing processing on the text sequence through n word-dividing models; processing the n word-dividing sub-results by the probability determining model branch in the result combining model, obtaining the word-dividing probability of each word-dividing position; processing the participle probability at each participle position through the activation function in the result combination, obtaining the participle result of the text sequence. The invention takes each participle position in the text sequence as unit to combine the participle results of multiple participle models so as to improve the accuracy of participle of newly appeared text sequence)
CN-108062753-B (Abstract an unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning. The method comprises the steps of deep coding-decoding full-convolution network segmentation system model setup, domain discriminator network model setup, segmentation system pre-training and parameter optimization, adversarial training and target domain feature extractor parameter optimization and target domain MRI brain tumor automatic semantic segmentation. According to the method, high-level semantic features and low-level detailed features are utilized to jointly predict pixel tags by the adoption of a deep coding-decoding full-convolution network modeling segmentation system, a domain discriminator network is adopted to guide a segmentation model to learn domain-invariable features and a strong generalization segmentation function through adversarial learning, a data distribution difference between a source domain and a target domain is minimized indirectly, and a learned segmentation system has the same segmentation precision in the target domain as in the source domain. Therefore, the cross-domain generalization performance of the MRI brain tumor full-automatic semantic segmentation method is improved, and unsupervised cross-domain adaptive MRI brain tumor precise segmentation is realized)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PINALBEN V PATEL whose telephone number is (571)270-5872. The examiner can normally be reached M-F: 10am - 8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at 571-272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Pinalben Patel/Examiner, Art Unit 2673