DETAILED ACTION
Claims 1-16 are NOT rejected under 35 U.S.C. 101 because, although the claimed invention is directed to an abstract idea (organizing human activity), it is NOT without significantly more.
Claims 1-3 and 7-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zheng et al. (US 10,970,829 B1).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Zhu et al. (US 12,469,137 B1):
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Zhang et al. (US 2019/0279346 A1):
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of LIU et al. (CN 109285111 A) with SEARCH machine translation:
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Phaphuangwittayakul et al. (Fast Adaptive Meta-Learning for Few-Shot Image Generation):
35 USC § 101 – Positive Statement
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The claims include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional element of the preamble (“training a system to segment a computerized tomography (CT1) image2”, claim 1) gives life and meaning to the remainder of the claim in view of, and reflecting, applicant’s disclosed improvement (faster CT acquisition with higher-contrast MR bit-mask3 segmentation).
Thus claims 1-16 are statutory under 35 USC 101.
Claim Rejections – 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3 and 7-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zheng et al. (US 10,970,829 B1).
[Image: media_image1.png, greyscale, 726×328]
Re 1. (Original), Zheng discloses A method for training a system to segment4 a computerized tomography (CT) image, comprising:
receiving5 a plurality of data groups (“in two sets”, c.11, ll.20-25: fig. 2: “domain A” in two sets & “domain B” in two sets), each group comprising an image (“match”, c.11, ll.20-25) pair comprising a CT image, a magnetic resonance (MR) image and a segmentation mask (i.e., “shape space data Y”, c.9, ll.15-20) for the MR image;
training6 a first generator of a first generative adversarial network (GAN) using the image pairs, to generate a synthetic MR image from the CT image (via “generating synthetic MRI images”, c.11, ll.25-30) based on (loss) relationships between image (“geometric shape”, c.8, ll.55-60) features (fig. 3: 318-B: “Shape-consistency Loss”) of the CT image and MR image in each image pair; and
training7 a second generator of a second GAN to generate a (“real or”, c.7, ll.20-25) synthetic mask8 for the synthetic MR image based on (said loss) relationships between (said shape) features of the synthetic MR image and segmentation mask of each data group.
Re 2. (Original), Zheng discloses The method of claim 1, wherein the first generator and second generator are trained together (shown in fig. 3:”G”).
Re 3. (Currently Amended), Zheng discloses The method of claim 1 or 2, wherein the first GAN comprises a first discriminator (“DA” c.6,ll.10-15: fig. 3:312) and the second GAN comprises a second discriminator (“DB” c.6,ll.10-15: fig. 3:310), the first discriminator being (“pre-“ c.10,ll.60-65) trained before (“jointly”) training the second discriminator.
Re 7. (Currently Amended), Zheng discloses The method of any one of claims 1 to 6 claim 1, wherein the objective function of the second (I count a first to fourth “G” in fig. 3) GAN, LGAN-2, is formulated as9 a first (I count a first to fourth “G” in fig. 3) conditional GAN, LcGAN-2, according10 to:
(A):
[Image: media_image2.png, greyscale, 54×929]
(B) others:
[Image: media_image3.png, greyscale, 146×1059]
wherein G1 and G2 are the first generator and second generator (I count a first to fourth “G” in fig. 3), respectively, D1 and D2 are a first discriminator (I count a first to fourth “G” in fig. 3) of the first GAN and second discriminator of the second GAN (I count a first to fourth “G” in fig. 3), respectively, x and y are the CT image and MR image of each pair, respectively, y’ is the synthetic MR image, z and z’ are random noise vectors, L is a loss based on the Binary Cross Entropy Loss and λ is a regularization parameter (“applied to the cycle-consistency loss and the shape-consistency loss”, c.9, ll.50-55)11.
Re 8. (Currently Amended), Zheng discloses The method of any one of claims 1 to 7 claim 1, further comprising segmenting a further CT image (“of different patients”, c.6, ll.5-10), by:
receiving the further CT image;
converting (via “cross-domain” c.4,ll.30-35) the further CT image to a synthetic magnetic resonance (MR) image (via “generating synthetic MRI images”c.11,ll.25-30) using the first generator;
generating a (“real or”, c.7,ll.20-25) segmentation mask for the synthetic MR image using the second generator; and
applying the segmentation mask to the CT image.
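For exposition only, the two-stage segmentation recited above (convert the CT image with the first generator, generate a mask with the second generator, apply the mask) can be sketched as follows; the toy generator functions are hypothetical stand-ins, not code from Zheng or from applicant’s disclosure:

```python
import numpy as np

def segment_ct(ct_image, first_generator, second_generator, threshold=0.5):
    """Illustrative two-stage pipeline: CT -> synthetic MR -> mask -> masked CT."""
    synthetic_mr = first_generator(ct_image)                 # cross-domain translation
    mask_probs = second_generator(synthetic_mr)              # mask generation
    mask = (mask_probs >= threshold).astype(ct_image.dtype)  # binarize to a bit mask
    return ct_image * mask                                   # apply mask to the CT image

# Toy stand-in generators, purely for demonstration:
ct = np.array([[10.0, 20.0], [30.0, 40.0]])
first_gen = lambda x: x / x.max()   # hypothetical CT-to-MR translation
second_gen = lambda mr: mr          # hypothetical mask predictor
masked = segment_ct(ct, first_gen, second_gen)  # -> [[0.0, 20.0], [30.0, 40.0]]
```

The sketch only shows the claimed data flow; real generators would be trained networks, not the scaling and identity functions used here.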
Re 9. (Currently Amended), Zheng discloses The method of any one of claims 1 to 8 claim 1, wherein the data groups are constructed by:
for each of a plurality of subjects, receiving a magnetic resonance (MR) image (204 “(denoted as domain B in FIG. 5) “, c.10,ll.15-20) for the subject;
training a first generator (fig.3: “GA”) to generate a synthetic CT image (fig. 5:220) based on the MR image (204); and
training a second generator (figs. 3,5:”GB”) to generate a reconstructed MR image (“to reproduce the original image” c.6,ll.30-35: fig. 3:306,308: “Input”) based on the synthetic CT image and/or an original CT image (fig. 5: 202).
Re 10. (Original), Zheng teaches The method of claim 9, further comprising, after training (via figs. 3,4) the first generator, generating the synthetic CT image (220) by:
receiving (twice via fig. 3:306,308: “Input”) a further (training) MR image (via “Domains A and B may be any suitable, but different, domains, such as, e.g., CT, MR, DynaCT, ultrasound, PET, etc.”, c.4, ll.55-60);
applying the first generator to the further MR image to generate a synthetic CT image (fig. 5:220) corresponding to the further MR image (corresponding synthetic CT image)12; and
creating an image pair (via “generators G.sub.A 310 and G.sub.B 312 to reproduce the original image” c.6,ll.30-35) comprising the corresponding synthetic CT image and the further MR image.
Claim 11 rejected like claim 1:
Re 11. (Original), Zheng discloses A system for segmenting a computerized tomography (CT) image, comprising:
memory (fig. 13);
at least one processor (fig. 13); and
a machine learning module (“as a module , component, subroutine, or other unit suitable for use in a computing environment.” c.14,ll.1-5) comprising one or more trained, machine learning models,
the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to:
receive a plurality of data groups, each group comprising an image pair comprising the CT image, a magnetic resonance (MR) image and a segmentation mask for the MR image;
train a first generator of a first generative adversarial network (GAN) of the machine learning module using the image pairs, to generate a synthetic MR image from the CT image based on relationships between image features of the CT image and MR image in each image pair; and
train a second generator of a second GAN of the machine learning module to generate a synthetic mask for the synthetic MR image based on relationships between features of the MR image and segmentation mask of each data group.
Claim 12 is rejected like claim 8:
Re 12. (Original), Zheng discloses The system of claim 11, wherein the instructions further cause the at least one processor to segment a further CT image by:
receiving the further CT image;
converting the CT image to a synthetic MR image using the first generator;
generating a segmentation mask for the synthetic MR image using the second generator; and
applying the segmentation mask to the CT image.
Claim 13 rejected like claim 9:
Re 13. (Currently Amended), Zheng discloses The system of claim 11 or 12, wherein the instructions further cause the at least one processor to construct the data groups by:
for each of a plurality of subjects, receiving a magnetic resonance (MR) image for the subject;
training a first generator of the machine learning module to generate synthetic CT images based on the MR image; and
training a second generator to generate a reconstructed MR image based on the synthetic CT image and/or an original CT image.
Claim 14 is rejected like claim 10:
Re 14. (Original), Zheng discloses The system of claim 13, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to generate the synthetic CT image, after training the first generator, by:
receiving a further MR image; and
applying the first generator to the further MR image to generate a synthetic CT image corresponding to the further MR image; and
creating an image pair comprising the corresponding synthetic CT image and the further MR image.
Claim 15 is rejected like claim 1:
Re 15. (Original), Zheng discloses A method for segmenting a computerized tomography (CT) image, comprising:
receiving13 (like input arrow) a CT image;
converting14 (like translation) the CT image to a synthetic magnetic resonance (MR) image (via “ an image in domain A translated to domain B as a synthesized image” c.6,ll. 30-35 wherein “Domains A and B may be any suitable, but different, domains, such as, e.g., CT, MR, DynaCT, ultrasound, PET, etc.” c.4,ll.50-55);
generating15 (like a GAN) a (“real or”, c.7,ll.20-25) segmentation mask for the synthetic MR image; and
applying16 (like using something) the segmentation mask to17 the CT image (“for18 domain A and domain B images” c.9,ll. 15-20 wherein “Domains A and B may be any suitable, but different, domains, such as, e.g., CT, MR, DynaCT, ultrasound, PET, etc.” c.4,ll.50-55).
Claim 16 is rejected like claims 1,11,15:
Re 16. (Original) A system for segmenting a computerized tomography (CT) image, comprising:
memory;
at least one processor; and
a machine learning module comprising one or more trained, machine learning models,
the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to:
receive a CT (“input” c.5,ll. 20-25) image;
convert (via “cascaded translation” c.6,ll.25-30) the CT image to a synthetic magnetic resonance (MR) image using the machine learning module;
generating19 a segmentation mask (resulting in “the segmentation results (e.g., a segmentation mask) generated by segmentators S.sub.A and S.sub.B” c.9.ll. 35-40) for the synthetic MR image using the machine learning module; and
apply the segmentation mask (“to encourage accurate segmentation results” c.9,ll.30-35) to the CT image.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Zhu et al. (US 12,469,137 B1):
[Image: media_image4.png, greyscale, 726×314]
Re 4. (Currently Amended), Zheng teaches The method of any one of claims 1 to 3 claim 1, wherein training the first GAN comprises enforcing20 pixel-level21 loss between the MR image and respective synthetic MR image.
Zheng does not teach the difference22 of claim 4 of:
enforcing23 pixel-level24.
Zhu teaches the difference of claim 4 of:
enforcing25 pixel-level26 (via “In at least one embodiment, reconstruction loss 716 can also constrain pixel-level similarity between generated “fake” data or images 726 and baseline data or images 702 in a same mode or domain, as described above.”, c.10,ll.60-65).
Since Zheng teaches a loss, one of skill in the art of losses can make Zheng’s be as Zhu’s seeing the change “loss functions 714, 716, 720 provide data that can be backpropagated in order to update probabilistic weights and improve GAN training and inference.”, Zhu c. 10,ll.30-35.
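For exposition only, a pixel-level loss of the kind quoted from Zhu, constraining pixel-level similarity between a real image and a generated image, can be sketched as a mean absolute per-pixel difference (a generic formulation, not code from Zhu or Zheng):

```python
import numpy as np

def pixel_level_l1(real_mr, synthetic_mr):
    """Mean absolute per-pixel difference, one common pixel-level loss."""
    return np.mean(np.abs(real_mr - synthetic_mr))

real = np.array([[0.0, 1.0], [1.0, 0.0]])
fake = np.array([[0.5, 1.0], [0.0, 0.0]])
loss = pixel_level_l1(real, fake)  # (0.5 + 0.0 + 1.0 + 0.0) / 4 = 0.375
```

Minimizing such a loss during training drives each pixel of the synthetic MR image toward the corresponding pixel of the real MR image.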
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Zhang et al. (US 2019/0279346 A1):
[Image: media_image6.png, greyscale, 726×345]
Re 5. (Currently Amended), Zheng teaches The method of any one of claims 1 to 4 claim 1, wherein training the second GAN comprises enforcing Binary Cross Entropy Loss (“functions 236 and 238”, c.7, ll.15-20) between the synthetic MR image and respective synthetic mask.
Zheng does not teach the difference of claim 5 of:
enforcing27 Binary.
Zhang teaches the difference of claim 5 of:
enforcing28 Binary (via “In some embodiments, to provide such a standardized scoring scheme, the training engine 218 uses a binary cross entropy loss function to enforce that all “bad” examples of images have scores that are less than a certain threshold (e.g., a score less than zero).”, [0056], last S).
Since Zheng teaches a loss, one of skill in the art of losses can make Zheng’s be as Zhang’s seeing the change “providing a standardized scoring scheme in which a particular score for an image indicates that the image would be considered “good” rather than merely better than others.”, Zhang [0056], 1st S.
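For exposition only, a binary cross entropy loss of the kind Zhang describes can be sketched generically (not code from Zhang; the clipping constant is an assumption added for numerical stability):

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Generic BCE: -mean(t*log(p) + (1-t)*log(1-p)), with clipping for stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

pred = np.array([0.9, 0.1, 0.8, 0.2])    # predicted mask probabilities
target = np.array([1.0, 0.0, 1.0, 0.0])  # ground-truth mask values
loss = binary_cross_entropy(pred, target)
```

The loss is small when confident predictions agree with the binary targets and grows without bound as confident predictions disagree, which is why the clipping is needed.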
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of LIU et al. (CN 109285111 A) with SEARCH machine translation:
[Image: media_image7.png, greyscale, 726×346]
Re 6. (Currently Amended), Zheng teaches claim 6 of The method of any one of claims 1 to 5 claim 1, wherein the objective function of the first GAN, LGAN-1, is formulated as a first (I count a first to fourth “G” in fig. 3) conditional GAN, LcGAN-1, according to:
LGAN-1 = E(x,y)[log D1(x, y)] + E(x,z)[log(1 − D1(x, G1(x, z)))] + λ·ℓ(G1),
wherein G1 is the first generator, D1 is a first discriminator of the first GAN, x and y are the CT image and MR image of each pair, respectively, z is a random noise vector, ℓ is a loss based on the L1 distance and λ is a regularisation parameter.
Zheng does not teach the difference of claim 6 of:
conditional (GAN)29…30
z is a random noise vector…
L1 distance and λ is a regularization parameter.
LIU teaches the difference of claim 6 of:
conditional (GAN)31 (“cGAN”, pg.8, 1st txt blk)…32
[Image: media_image8.png, greyscale, 311×997]
z is a random noise vector…
L1 distance and λ is a regularization parameter (or “L1 regularization function”, pg. 9, 3rd txt blk).
Since Zheng teaches a loss function, one of skill in the art of loss functions can make Zheng’s be as LIU’s seeing the change “improving the efficiency of the model for training”, LIU, pg. 10, last txt blk.
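For exposition only, the structure of the claimed objective, an adversarial conditional-GAN term plus a λ-weighted L1 term, can be sketched generically (this is the standard formulation, not code from Zheng or LIU; the λ value and toy inputs below are arbitrary):

```python
import numpy as np

def generator_objective(d_fake, real, fake, lam=100.0, eps=1e-7):
    """Generic cGAN generator loss: adversarial term + lambda * L1 term.

    d_fake: discriminator scores D1(x, G1(x, z)) in (0, 1) for generated images
    real, fake: target image y and generated image G1(x, z)
    """
    adversarial = -np.mean(np.log(np.clip(d_fake, eps, 1.0)))  # fool the discriminator
    l1 = np.mean(np.abs(real - fake))                          # L1 distance to target
    return adversarial + lam * l1

d_scores = np.array([0.5, 0.5])
y = np.array([1.0, 0.0])
g_out = np.array([0.9, 0.1])
total = generator_objective(d_scores, y, g_out, lam=10.0)
```

Here λ trades off realism (the adversarial term) against pixel fidelity (the L1 term), which is the role the claim assigns to its regularisation parameter.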
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 10,970,829 B1) in view of Phaphuangwittayakul et al. (Fast Adaptive Meta-Learning for Few-Shot Image Generation):
[Image: media_image9.png, greyscale, 726×604]
Re 7. (Currently Amended), Zheng teaches The method of any one of claims 1 to 6 claim 1, wherein the objective function of the second (I count a first to fourth “G” in fig. 3) GAN, LGAN-2, is formulated as33 a first (I count a first to fourth “G” in fig. 3) conditional GAN, LcGAN-2, according34 to:
(A):
[Image: media_image2.png, greyscale, 54×929]
(B) others:
[Image: media_image3.png, greyscale, 146×1059]
wherein G1 and G2 are the first generator and second generator (I count a first to fourth “G” in fig. 3), respectively, D1 and D2 are a first discriminator (I count a first to fourth “G” in fig. 3) of the first GAN and second discriminator of the second GAN (I count a first to fourth “G” in fig. 3), respectively, x and y are the CT image and MR image of each pair, respectively, y' is the synthetic MR image, z and z' are random noise vectors, L is a loss based on the Binary Cross Entropy Loss and λ is a regularisation parameter (“applied to the cycle-consistency loss and the shape-consistency loss”, c.9, ll.50-55)35.
Zheng does not teach36 under the NON-broadest reasonable interpretation (i.e., under a narrow subset of the broadest reasonable interpretation of claim 7) the difference of claim 7 of
conditional (GAN)…
[Image: media_image2.png, greyscale, 54×929]
z and z’ are random noise vectors…
Binary…
regularization.
Phaphuangwittayakul teaches37 under the NON-broadest reasonable interpretation (i.e., under a narrow subset of the broadest reasonable interpretation of claim 7) the difference of claim 7 of
conditional (“conditioning” “model”, pg. 2206, lcol, 2nd bullet: Fig. 1: the model) (GAN)…
[Image: media_image2.png, greyscale, 54×929]
i.e., equations (1)-(19):
[Image: media_image10.png, greyscale, 309×1027]
z and z’ are random noise vectors (“concatenated with the extracted feature vector r from encoder E1.”, pg. 2208, lcol, 1st para, 5th S)…
(“GAN (RaGAN) [42]”) Binary (“cross-entropy loss and encoder E adopting the structure of the U-Net [51] encoder.”, pg. 2210, lcol, last para, 2nd S)…
regularization (via said (18)).
Since Zheng teaches a GAN, one of skill in the art of GANs can make Zheng’s be as Phaphuangwittayakul’s seeing the change “not only to stabilise the training process but also to improve the quality of generated images”, Phaphuangwittayakul, pg. 2210, rcol, 1st S.
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure.
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation
Relevance
IDS cited & D1 reference Rubin et al. (CT-To-MR Conditional Generative Adversarial Networks for Ischemic Stroke Lesion Segmentation)
Rubin teaches a random noise vector, pg. 2, lcol, last para, 2nd S:
The generator network, G(z), attempts to generate outputs that resemble images, y, from a distribution of training data, where z is a random noise vector, G:z→y.
as the closest to the claimed “z and z’ are random noise vectors” of claim 7’s Markush alternative (A).
IDS cited & D2 reference Zhang (Translating and Segmenting Multimodal Medical Volumes with Cycle- and Shape-Consistency Generative Adversarial Network)
Zhang (Translating and Segmenting Multimodal Medical Volumes with Cycle- and Shape-Consistency Generative Adversarial Network) corresponds to Zheng et al. (US 10,970,829 B1) as applied in the rejection of claim 1 as the closest to claim 1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/
Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676
1 Applicant’s disclosure, page 1, lines 19-21: “Consequently, well-established methods have been proposed to automatically segment tissues in MR brain scans, but few works have been done in the CT domain.”
2 Applicant’s disclosure, page 1, lines 15-20: “Compared to MR scans, CT images are usually cheaper and faster to obtain. However, MR scans have higher soft tissue contrast.”
3 mask: computing a bit pattern which, by convolution with a second pattern in a logical operation, can be used to isolate a specific subset of the second pattern for examination (Dictionary.com)
4 The preamble’s “to segment” has no result and thus the preamble has no manipulative effect in the remainder of claim 1.
5 BROAD CLAIM LANGUAGE: -ing (of “receiving”): a suffix of nouns formed from verbs (receive), expressing (i.e., put (IP) into words) the action (“receiving” more directed to the action than a result: such as received groups) of the verb (receive) or its result (none) , product (none), material (none), etc. (the art of building; a new building; cotton wadding ). (Dictionary.com)
6 BROAD CLAIM LANGUAGE: -ing (of “training”): a suffix of nouns formed from verbs (train), expressing (i.e., put into words) the action of the verb (train) or its result (“the synthetic MR image”), product (“the synthetic MR image”), material, etc. (the art of building; a new building; cotton wadding). (Dictionary.com)
7 BROAD CLAIM LANGUAGE: -ing (of “training”): a suffix of nouns formed from verbs (train), expressing (i.e., put into words) the action ( “training” more directed to the action than a result: such as a generated synthetic mask) of the verb (train) or its result (none) , product (no product of this particular “training”), material, etc. (the art of building; a new building; cotton wadding ). (Dictionary.com)
8 mask: computing a bit pattern which, by convolution with a second pattern in a logical operation, can be used (not apparent in claim 1) to isolate a specific subset of the second pattern for examination (Dictionary.com)
9 MPEP 2117 Markush Claims [R-01.2024], I. MARKUSH CLAIM, 2nd para, 4th S: Although the term "Markush claim" is used throughout the MPEP, any claim that recites alternatively usable members, regardless of format, should be treated as a Markush claim:
Implicit Markush element of Markush alternatives [(A) and (B)] follows:
10 -ing (of “according”): a suffix of nouns formed from verbs, expressing the action of the verb or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein etc. is defined: and others ((B): L(GA,GB.DA.DB.SA.SB)); and so forth; and so on (used to indicate that more of the same sort or class might have been mentioned, but for brevity have been omitted), wherein and is defined: (used to connect [Markush] alternatives (A) and (B)), wherein others is defined: different or distinct from the one ((A):LGAN-2) or ones already mentioned or implied (Dictionary.com)
11 Since Markush alternative (B) is taught the Markush element [(A) and (B)] is taught under the broadest reasonable interpretation of claim 7.
12 This in parenthesis is not limiting under the broadest reasonable interpretation
13 BROAD CLAIM LANGUAGE: i.e., putting into words the action of “receive”, etc (i.e., and so forth, i.e. likewise or correspondingly or similarly into view or consideration).
14 BROAD CLAIM LANGUAGE: i.e., putting into words the action of “convert”, etc (i.e., and so forth, i.e. likewise or correspondingly or similarly into view or consideration).
15 BROAD CLAIM LANGUAGE: i.e., putting into words the action of “generate”, etc (i.e., and so forth, i.e. likewise or correspondingly or similarly into view or consideration).
16 BROAD CLAIM LANGUAGE: i.e., putting into words the action of “apply”, etc (i.e., and so forth, i.e. likewise or correspondingly or similarly into view or consideration).
17 to: (used for expressing addition or accompaniment) with. (Dictionary.com)
18 for: intended to belong to, or be used in connection with. (Dictionary.com)
19 BROAD CLAIM LANGUAGE: i.e., putting into words the action of “generate”, etc (i.e., and so forth, i.e. likewise or correspondingly or similarly into view or consideration).
20 BROAD CLAIM LANGUAGE
21 cumulative adjective
22 THE CLAIMED INVENTION AS A WHOLE:
The problem is via applicant’s disclosure, page 1,ll.15-20:
Compared to MR scans, CT images are usually cheaper and faster to obtain. However, MR scans have higher soft tissue contrast. Consequently, well-established methods have been proposed to automatically segment tissues in MR brain scans, but few works have been done in the CT domain.
The solution is:
pg. 8,ll.5-10:
Several efforts have been made in leveraging GAN to tackle CT- related problems, but they mostly leverage GAN as a data augmentation tool rather than in the generation of synthetic MR images from which to generate a mask to identify tissue in a corresponding CT scan.
and page 12,ll. 15-25:
[Image: media_image5.png, greyscale, 386×788]
The claimed “enforcing pixel-level” does not make an explicit appearance (“tissue class”-“pixels”) in the above cited pages of applicant’s disclosure.
Indication of obviousness: the lack in claims 1 & 4 of the disclosed “to identify tissue in a corresponding CT scan” and “the two U-Net, they co-adapted to each other to produce the best synthetic mask” is an Indication of obviousness.
23 BROAD CLAIM LANGUAGE
24 cumulative adjective
25 BROAD CLAIM LANGUAGE
26 cumulative adjective
27 BROAD CLAIM LANGUAGE
28 BROAD CLAIM LANGUAGE
29 (italics) represent claim limitations already taught
30 ellipses (…) represent claim limitations already taught
31 (italics) represent claim limitations already taught
32 ellipses (…) represent claim limitations already taught
33 MPEP 2117 Markush Claims [R-01.2024], I. MARKUSH CLAIM, 2nd para, 4th S: Although the term "Markush claim" is used throughout the MPEP, any claim that recites alternatively usable members, regardless of format, should be treated as a Markush claim:
Implicit Markush element of Markush alternatives [(A) and (B)] follows:
34 -ing (of “according”): a suffix of nouns formed from verbs, expressing the action of the verb or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein etc. is defined: and others ((B): L(GA,GB.DA.DB.SA.SB)); and so forth; and so on (used to indicate that more of the same sort or class might have been mentioned, but for brevity have been omitted), wherein and is defined: (used to connect [Markush] alternatives (A) and (B)), wherein others is defined: different or distinct from the one ((A):LGAN-2) or ones already mentioned or implied (Dictionary.com)
35 Since Markush alternative (A) is taught the Markush element [(A) and (B)] is taught under the broadest reasonable interpretation of claim 7.
36 The examiner anticipates that applicants will narrow the claim scope of claim 7 to the claimed equation: LGAN-2.
37 The examiner anticipates that the claim scope of claim 7 will narrow to the claimed equation (or Markush alternative (A)): LGAN-2; thus, Phaphuangwittayakul is being applied under 35 USC 103.