Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to communications: Application filed on January 16, 2024, and Drawings filed on January 16, 2024.
2. Claims 1–7 are pending in this case. Claim 1 is an independent claim.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regard to claim 1, applicant claims the limitation of “inputting the target domain image x.sub.t into the source-target inter-domain semantic segmentation model F.sub.inter to obtain a category segmentation probability P.sub.t of the target domain image x.sub.t, and then using the category segmentation probability P.sub.t to calculate segmentation probability credibility S.sub.t and a target domain pseudo label custom-character; (3) arranging all target domain images x.sub.t in descending order according to the segmentation probability credibility S.sub.t, and then dividing all the target domain images x.sub.t into K subsets of target domain images {X.sub.t.sup.1, X.sub.t.sup.2, . . . X.sub.t.sup.K} on average according to an order of arrangement, wherein K is a natural number greater than 1.”
Applicant uses the symbol x.sub.t to represent both a specific “target domain image” and “all target domain images”. It is unclear how the target domain image relates to “all target domain images”: whether the target domain image is one of the “all target domain images” or a separate image distinguished from them. It is also unclear whether the category segmentation probability, segmentation probability credibility, and target domain pseudo label are determined for each image of the group of target domain images or only for the first-defined “target domain image”. For the purpose of compact prosecution, the claim will be examined as though the category segmentation probability, segmentation probability credibility, and target domain pseudo label are determined for each image of the group of target domain images.
It is also unclear what constitutes “and then dividing all the target domain images x.sub.t into K subsets of target domain images {X.sub.t.sup.1, X.sub.t.sup.2, . . . X.sub.t.sup.K} on average according to an order of arrangement”. In particular, it is unclear what “on average” means: whether each subset contains the same number of images, or whether the average relates to the segmentation probability credibility. For the purpose of compact prosecution, the claim will be examined as though each subset contains the same number of images.
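For clarity of the record, the interpretation adopted for compact prosecution can be sketched as follows. This is a minimal illustration only, assuming the segmentation probability credibility S.sub.t is the mean of the per-pixel maximum class probabilities and the pseudo label is the per-pixel argmax; the claim does not specify either formula, and the function and variable names are hypothetical.

```python
import numpy as np

def split_by_credibility(prob_maps, K):
    """Examined interpretation: compute a pseudo label and a
    credibility score for EACH target domain image, rank all images
    by credibility in descending order, and divide them into K
    subsets of equal (as near as possible) size.

    prob_maps: list of (C, H, W) arrays of per-pixel class
    probabilities P_t, one per target domain image x_t.
    """
    # Pseudo label: per-pixel argmax over the class axis (assumed).
    pseudo_labels = [p.argmax(axis=0) for p in prob_maps]
    # Credibility S_t: mean of per-pixel max probability (assumed formula).
    cred = np.array([p.max(axis=0).mean() for p in prob_maps])
    # Indices of all images, arranged in descending order of credibility.
    order = np.argsort(-cred)
    # K subsets of equal size, preserving the order of arrangement.
    subsets = np.array_split(order, K)
    return pseudo_labels, cred, [s.tolist() for s in subsets]
```

Under this reading, “on average” controls only the subset sizes, not the credibility values within each subset.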
Claims 1-7 would be allowable if the applicant overcomes the 112 rejections.
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jeong, Pub. No.: US 20220101522 A1: A cell image segmentation method using scribble labels includes iteratively pre-training via an image segmentation network (U-Net) using a cell image and scribble labels indicating a cell region and a background region as training data, calculating an exponential moving average (EMA) of image segmentation prediction probabilities at a predetermined interval during the pre-training, self-training by assigning the cell region and the background region for which the EMA of image segmentation prediction probabilities is over a preset threshold to be a pseudo-label, and iteratively refining the image segmentation prediction probability based on a scribbled loss (L.sub.sp) obtained through a result of the training and an unscribbled loss (L.sub.up). Accordingly, it is possible to achieve cell image segmentation with high reliability using only scribble labels.
Buckland, Pub. No.: US 20210209758 A1.
Grady, Pub. No.: 20200184646 A1: The method involves receiving multiple images of an anatomical structure. Multiple geometric labels of the anatomical structure are received. A parameterized representation of the anatomical structure is generated based on the geometric labels and the received images. An image of a patient anatomy is received. A probability distribution is computed for a patient-specific segmentation boundary of the patient anatomy based on the parameterized representation. A segmentation boundary is generated based on the probability distribution.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DI XIAO/Primary Examiner, Art Unit 2178