DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 3 and 7 are objected to because of the following informalities:
Claim 3 should be amended to read: “wherein the ARIA is associated with parenchymal edema or sulcal effusion (ARIA-E) in the brain of the patient.”
Claim 7 should be amended to correct an apparent misspelling: “wherein the anti-Aβ antibody is selected from the group consisting of bapineuzumab, solanezumab, aducanumab, gantenerumab, crenezumab, donanemab, and lecanemab.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 8-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Claim 8 recites “in response to outputting the quantification of ARIA in the brain of the patient, determining one or more anti-ARIA treatments for the patient.” Claim 9 recites “administering the one or more anti-ARIA treatments to the patient.” Claim 10 recites “wherein the one or more anti-ARIA treatments comprise one or more anti-ARIA antibodies.”
Applicant’s disclosure does not teach how to administer an anti-ARIA treatment and does not describe what treatments qualify as “anti-ARIA,” except that one treatment includes “anti-ARIA antibodies.” To be clear, Applicant’s disclosure does not provide any information regarding treatments or antibodies that are anti-ARIA. The language of claims 8-10 is repeated at [0172]-[0174] without additional information. While eight (8) other paragraphs mention anti-ARIA treatments, these eight paragraphs do not provide any additional information and appear to be the same two paragraphs repeated four times. (Compare [0015]-[0016], [0046]-[0047], [0085]-[0086], and [0100]-[0101].)
Moreover, Examiner is unable to find examples of anti-ARIA treatments or anti-ARIA antibodies that reduce edema or hemorrhages in the brain and that are supported by evidence such that the treatments or antibodies would be known by those having ordinary skill in the art prior to or at the time of filing the application. WITHINGTON, discussed below, suggests using a pulse of brain-penetrant intravenous corticosteroids (dexamethasone or methylprednisolone), plasmapheresis, and anti-convulsants for seizures. However, WITHINGTON acknowledges that “[t]o date, there are no controlled studies or formal guidelines for decision-making in ARIA management and treatment.” (p.6, left column before Conclusion).
One conceivable interpretation of “anti-ARIA treatment” is to reduce the dosage or to stop the administration of an anti-Aβ antibody, thereby decreasing the likelihood of developing ARIA. However, Applicant’s disclosure does not appear to use this interpretation because reducing or stopping the therapy is listed separately from anti-ARIA treatments. “By monitoring ARIA in the patient over time, the one or more computing devices may determine whether any of the responses above (e.g., reduced dosage, terminated or temporarily suspended administration, anti-ARIA treatments) is effective, and formulate an adjusted response accordingly.” (emphasis added) ([0016]).
Because the disclosure does not explain what is meant by an “anti-ARIA treatment” or provide any examples of “anti-ARIA antibodies,” one having ordinary skill in the art is not enabled to make or use the invention.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 8-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As explained above with respect to the Section 112(a) rejection of claims 8-10, the disclosure does not explain what is meant by an “anti-ARIA treatment” or provide any examples of “anti-ARIA antibodies.” As such, the meanings of “anti-ARIA treatment” and “anti-ARIA antibodies” would not be reasonably clear to one having ordinary skill in the art, and a person having ordinary skill would not be able to understand the scope of the claims. “If the language of the claim is such that a person of ordinary skill in the art could not interpret the metes and bounds of the claim so as to understand how to avoid infringement, a rejection of the claim under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, is appropriate.” (MPEP 2173, II).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14, 16, 18-21, and 41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite:
accessing a set of one or more brain-scan images associated with the patient – (claims 1, 21, and 41);
inputting the set of one or more brain-scan images into one or more machine-learning models trained to generate a segmentation map based on the set of one or more brain-scan images - (claims 1, 21, and 41);
outputting a quantification of ARIA in the brain of the patient based at least in part on the segmentation map - (claims 1, 21, and 41).
Independent claims 1, 21, and 41, as drafted and under their broadest reasonable interpretation, recite a mathematical concept and/or a mental process. (MPEP 2106.04(a)(2)(I)). The claims recite a mental process because the trained machine-learning model replicates a doctor’s analysis of a medical image by evaluating the image and providing a judgment/opinion as to ARIA. (See Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025) (“[C]laims that do no more than apply established methods of machine learning to a new data environment” are not patent eligible)). Moreover, the operations of a trained machine-learning model also involve mathematical concepts, such as pre-processing the image for the trained model (e.g., resizing/cropping and normalization) and performing forward-pass mathematical operations (e.g., convolutions, downsampling, upsampling, and determining probabilities) within the trained model. As such, the claimed invention recites both a mental process and a mathematical concept (i.e., an abstract idea).
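NOTE (illustration only): The following is a minimal sketch, under generic assumptions, of the kinds of pre-processing and forward-pass mathematical operations identified above (cropping, normalization, downsampling, upsampling, and converting outputs to probabilities). The array sizes, function names, and use of a sigmoid are illustrative assumptions only and are not drawn from Applicant’s disclosure or any cited reference.

```python
# Illustration only: generic pre-processing and forward-pass style operations.
# Shapes, names, and the sigmoid output are assumptions, not Applicant's method.
import numpy as np

def preprocess(image: np.ndarray, size: int = 128) -> np.ndarray:
    """Center-crop to a square and normalize intensities (zero mean, unit variance)."""
    h, w = image.shape
    s = min(h, w, size)
    top, left = (h - s) // 2, (w - s) // 2
    crop = image[top:top + s, left:left + s].astype(np.float64)
    return (crop - crop.mean()) / (crop.std() + 1e-8)

def downsample(x: np.ndarray) -> np.ndarray:
    """2x2 average pooling, a simple downsampling operation."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbor upsampling by a factor of 2."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

img = preprocess(np.random.rand(160, 192))                        # pre-processing (mathematical)
probabilities = 1.0 / (1.0 + np.exp(-upsample(downsample(img))))  # per-pixel probabilities (mathematical)
```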
Examiner also notes that claims 1, 21, and 41 are conceptually similar to those in Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016). The claims in Electric Power Group were found to be patent ineligible because, like the claims in this case, they essentially recited collecting information, analyzing that information, and presenting results of that analysis. “[W]e have treated collecting information, including when limited to particular content (which does not change its character as information), as within the realm of abstract ideas…we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category…[and] we have recognized that merely presenting the results of abstract processes of collecting and analyzing information, without more…, is abstract as an ancillary part of such collection and analysis.” Electric Power Group, 830 F.3d at 1353-54.
Once it is established that the claims recite a judicial exception (i.e., an abstract idea), the next question to consider is whether the claims integrate the judicial exception into a practical application. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. (MPEP 2106.04(d)).
Additional elements should be considered to determine if they integrate the judicial exception into a practical application. Here, the additional elements include: (1) the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map; and (2) wherein at least one of the plurality of pixel-wise class labels comprises an indication of ARIA in the brain of the patient.
In this case, the judicial exception is not integrated into a practical application. The additional element/step of (1) is a well-understood, routine, conventional activity/element for machine-learning models trained to analyze medical images. (MPEP 2106.05(A)). Segmentation maps are intended to differentiate tissues based on the class/label assigned to each pixel. “However, the main purpose [of segmentation] remains to analyze a group of pixels or voxels and discriminate them based on subjective characteristics.” (Akkineni, Sai Darahas, and S. P. K. Karri, “Deep Learning Algorithms for Brain Image Analysis,” Brain and Behavior Computing, CRC Press, 2021, pp. 267-291) (see also FUJIBAYASHI and GAO discussed below).
The additional element of (2) (i.e., class labels indicating ARIA) does no more than generally link the use of a judicial exception to a particular technological environment or field of use. (MPEP 2106.05(h)). More specifically, the recited trained model could be applied to various image segmentation scenarios in which tissues are differentiated from one another but, in this case, it is being applied to differentiate and identify ARIA. (Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025) (“[C]laims that do no more than apply established methods of machine learning to a new data environment” are not patent eligible)).
If the claims recite a judicial exception and do not integrate that exception into a practical application, as is the case here, the next question is whether the claims include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims do not. A shared quality of the additional elements/steps (1) and (2) is that they do not recite any meaningful limitation that transforms the judicial exception into a patent-eligible application. (MPEP 2106.05(II)). As explained above, the additional element/step of (1) is a well-understood, routine, conventional activity. (MPEP 2106.05(A)). The additional element of (2) only generally links the judicial exception to a particular technological environment.
Accordingly, claims 1, 21, and 41 do not include patent-eligible subject matter.
Dependent claims 2-14, 16, and 18-20 also fail to recite patent-eligible subject matter.
For example, claims 2-4 (i.e., ARIA being associated with microhemorrhages, hemosiderin deposits, edema, or sulcal effusion, or the patient having Alzheimer’s disease), claims 7 and 10 (i.e., the anti-Aβ antibody being a particular anti-Aβ antibody or the anti-ARIA treatment including anti-ARIA antibodies), and claims 11-12 (i.e., the type of brain-scan images or a particular type of MR image) recite limitations that amount to no more than generally linking the judicial exception to a field of use or technological environment. (MPEP 2106.05(h)).
Claim 5 (i.e., determining a dosage adjustment in response to quantification), claim 6 (i.e., terminating or temporarily suspending use in response to quantification), and claim 8 (i.e., determining one or more anti-ARIA treatments for the patient in response to quantification) are also examples of mental processes (e.g., concepts performed in the human mind, such as observation, evaluation, judgment, opinion). (MPEP 2106.04(a)). Claims 5, 6, and 8 are typical decisions made by the doctor after analyzing and evaluating the MRI images of the brain.
Claim 9 recites “administering the one or more anti-ARIA treatments to the patient.” While administering a particular treatment can integrate a judicial exception into a practical application, the limitation of claim 9 lacks specificity and is not tied to any preceding limitation. When determining whether a claim applies or uses a recited judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, the particularity or generality of the treatment should be considered. (MPEP 2106.04(d)(2)). In fact, the limitation of claim 9 is similar to an example in the MPEP illustrating how a claim can be too general. More specifically, “administering a suitable medication to a patient” was found to be “not particular, and…instead merely instructions to ‘apply’ the exception in a generic way.” (MPEP 2106.04(d)(2)(a)).
Claims 13, 14, 16, and 20 (i.e., encoder and decoder), claim 18 (i.e., trained using image augmentation), and claim 19 (i.e., pixel-wise class label) recite well-understood, routine, conventional activities/elements for machine learning models that are trained to analyze medical images, as explained below with respect to GAO in the Section 103 rejections. (MPEP 2106.05(A)).
Accordingly, claims 1-14, 16, 18-21, and 41 are rejected for lacking patent-eligible subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 11-14, 16, 19-21, and 41 are rejected under 35 U.S.C. 103 as being unpatentable over a translation of Int’l. Publ. No. WO 2023/145953 A1 (hereinafter “FUJIBAYASHI”) in view of U.S. Patent Appl. Publ. No. 2020/0349697 A1 (hereinafter “GAO”).
FUJIBAYASHI teaches a system and a method that are capable of “automatically providing information on an abnormal portion” of a brain using MRI images. (p.1, lines 32-33 and lines 19-20). “[A] computer program causes a computer to acquire an MRI image, specify a signal value of the acquired MRI image, and obtain the specified signal value to display the abnormal part based on and execute the process…. information on an abnormal portion can be automatically provided from an MRI image.” (p.1, lines 37-41). The system and method can be used to evaluate amyloid related imaging abnormalities (ARIA). (p.4, lines 23-31).
With respect to claim 1, FUJIBAYASHI teaches a method for quantifying amyloid related imaging abnormalities (ARIA) in a brain of a patient by one or more computing devices. “[A] computer program causes a computer to acquire an MRI image, specify a signal value of the acquired MRI image, and obtain the specified signal value to display the abnormal part based on and execute the process…. information on an abnormal portion can be automatically provided from an MRI image.” (p.1, lines 37-41). With respect to amyloid related imaging abnormalities (ARIA), see p.4, lines 23-31, which is discussed below. With respect to quantifying ARIA, Figures 7-10 illustrate screen displays that present the number and sizes of different edema sites and microhemorrhages. The method includes:
accessing a set of one or more brain-scan images associated with the patient. “The MRI apparatus 10 is an apparatus capable of capturing a tomographic image using a magnetic resonance phenomenon, and can obtain an MRI image (also referred to as an MR image).” (p.2, lines 7-8). “The image data server 100 records MRI images for each patient.” (p.2, line 32). The images are part of a set. “The example in the figure indicates that the 128th slice image of 256 slice images is displayed.” (p.6, lines 16-17).
inputting the set of one or more brain-scan images into one or more machine-learning models trained to generate a segmentation map based on the set of one or more brain-scan images. “When an MRI image is input to the brain tissue identifying section 57, the brain tissue identifying section 57 identifies which tissue each pixel of the MRI image belongs to.” (p.4, lines 5-6). “Also, the brain tissue identifying unit 57 may identify brain tissue using a segmentation method. Specifically, using the Bayesian estimation algorithm from the MRI image, a mask image is generated for each tissue….” (p.3, lines 55-57). FUJIBAYASHI also teaches that the segmentation map includes a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map, wherein at least one of the plurality of pixel-wise class labels comprises an indication of ARIA in the brain of the patient. With respect to the brain tissue identifying section 57, FUJIBAYASHI teaches that “a mask image is generated for each tissue with the probability that the tissue exists in each pixel region in the image as a pixel value….” (p.3, lines 56-57). With respect to a signal value specifying unit 56, FUJIBAYASHI teaches that “when the signal value of the MRI image is greater than or equal to the first threshold, it can be determined that the set (region) of pixels having the signal value is edema. Similarly, when the signal value of the MRI image is equal to or less than the second threshold, it can be determined that a set (region) of pixels having that signal value is microhemorrhage.” (p.4, lines 53-55). NOTE: Examiner is interpreting “an indication of ARIA in the brain” as being taught by identifying a set (region) of pixels as being edema or a microhemorrhage.
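NOTE (illustration only): The following is a minimal sketch of the signal-value thresholding described in the passage quoted above (pixels at or above a first threshold treated as edema; pixels at or below a second threshold treated as microhemorrhage). The threshold values, label codes, and function name are illustrative assumptions and are not taken from FUJIBAYASHI.

```python
# Illustration only: per-pixel labeling by signal-value thresholds.
# Threshold values and label codes are assumptions, not FUJIBAYASHI's values.
import numpy as np

NORMAL, MICROHEMORRHAGE, EDEMA = 0, 1, 2

def label_pixels(mri_slice: np.ndarray, first_threshold: float, second_threshold: float) -> np.ndarray:
    """Return a per-pixel label map (a simple segmentation map) from raw signal values."""
    labels = np.full(mri_slice.shape, NORMAL, dtype=np.uint8)
    labels[mri_slice >= first_threshold] = EDEMA             # high signal -> edema
    labels[mri_slice <= second_threshold] = MICROHEMORRHAGE  # low signal -> microhemorrhage
    return labels

mri_slice = np.random.rand(256, 256) * 100.0                  # placeholder signal values
label_map = label_pixels(mri_slice, first_threshold=80.0, second_threshold=10.0)
```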
With respect to the limitation, “inputting…into one or more machine-learning models,” FUJIBAYASHI teaches that the operations performed by the brain tissue identifying unit 57 and the signal value specifying unit 56 could be performed by learning models. More specifically, after discussing the brain tissue identifying unit 57 performing segmentation, FUJIBAYASHI then teaches that a learning model could also be used. “Also, a learning model generated by machine learning other than the Bayesian estimation algorithm may be used. For the learning model, for example, U-Net, GAN (Generative Adversarial Network), SegNet, etc. may be used.” (emphasis added) (p.3, line 58 to page 4, line 2). Notably, Figures 3 and 4 correspond to the operations performed by the brain tissue identifying unit 57 and the signal value specifying unit 56. FUJIBAYASHI reiterates that either could be performed by a learning model. “In the examples of FIGS. 3 and 4 described above, the clustering method is used to classify brain tissue, and the signal value of the MRI image is specified for each brain tissue to determine edema and microhemorrhage. Determination of microhemorrhage is not limited to this. For example, a learning model generated by machine learning or a technique based on statistical analysis (for example, discriminant analysis, which is one of techniques for automatically obtaining a threshold value) may be used.” (p.5, lines 14-16). NOTE: FUJIBAYASHI teaches inputting the images into a learning model. “The learning model 61 is generated to output edema region information, microhemorrhage region information, and normal region (neither edema nor microhemorrhage) region information when an MRI image is input. The MRI images input to the learning model 61 may be T2-weighted images, T2*-weighted images, FLAIR images, SWI images, PADRE images, QSM images, or R2* images.” (p.5, lines 19-23).
outputting a quantification of ARIA in the brain of the patient based at least in part on the segmentation map. Figures 9 and 10 are screens that display detection results. Figure 9 is shown here and shows information on edema. “In the detection results shown in FIG. 9, one edema was detected at time t1, four edemas were detected at time t2, and two edemas were detected at time t3.” (p.7, lines 24-25). Figure 10 shows information on microbleeds. “In the detection results shown in FIG. 10, two microbleeds were detected at time t1, eight microbleeds were detected at time t2, and six microbleeds were detected at time t3.” (p.8, lines 18-19).
[Figure 9 of FUJIBAYASHI reproduced here]
FUJIBAYASHI teaches that the method could be used to monitor ARIA. “ARIA includes edema with fluid accumulation (ARIA-E) and small hemorrhages on the brain called cerebral microhemorrhages (ARIA-H). Cerebral microhemorrhages are small hemosiderin deposits. ARIA can be specified, for example, on the condition that the magnetic susceptibility is equal to or greater than a predetermined magnetic susceptibility threshold and that the size of the abnormal portion is the specified ARIA size. The specified size of ARIA can be ARIA-H if it is 1 cm or less, and ARIA-E if it is over 1 cm. In particular, if the size of the ARIA is 5 cm or less, the severity of the ARIA can be mild, if it is between 5 cm and 9 cm, it is moderate, and if it is greater than 9 cm, it can be severe.” (p.4, lines 25-31). “With the above configuration, the number of microbleeds can be automatically quantitatively monitored from MRI images, and the increase or decrease in the number of microbleeds at multiple time points can be tracked. As a result, it is possible to automatically quantify microbleeds without being troublesome and unaffected by the experience of doctors, and to understand the progression of lesions and administer appropriate medication under pathological management that monitors changes over time. It can be used as an index for judgment.” (p.8, lines 42-47).
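NOTE (illustration only): The following is a minimal sketch of the size-based ARIA rules quoted above (1 cm or less treated as ARIA-H, over 1 cm treated as ARIA-E; 5 cm or less mild, between 5 cm and 9 cm moderate, greater than 9 cm severe). The function name and return format are illustrative assumptions only.

```python
# Illustration only: size-based ARIA type and severity rules as quoted from FUJIBAYASHI.
def classify_aria(size_cm: float) -> tuple[str, str]:
    """Return (ARIA type, severity) for an abnormal region of the given size in centimeters."""
    aria_type = "ARIA-H" if size_cm <= 1.0 else "ARIA-E"
    if size_cm <= 5.0:
        severity = "mild"
    elif size_cm <= 9.0:
        severity = "moderate"
    else:
        severity = "severe"
    return aria_type, severity

print(classify_aria(0.4))   # ('ARIA-H', 'mild')
print(classify_aria(6.2))   # ('ARIA-E', 'moderate')
```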
While FUJIBAYASHI teaches identifying brain tissue using a “segmentation method” and that a set (region) of pixels can correspond to edema or a microhemorrhage, (p.3, lines 55-57), it is not clear that FUJIBAYASHI teaches the one or more machine-learning models being trained to generate a segmentation map based on the set of one or more brain-scan images or that the at least one of the plurality of pixel-wise class labels comprises “an indication of ARIA.” Nonetheless, each of these is a well-known feature of learning models that analyze medical images.
For example, in the same field of endeavor, GAO teaches systems and methods that are configured to detect intracerebral hemorrhages (ICH) and “use an end-to-end multi-task learning model for modeling head scan images to solve ICH detection and segmentation problems.” (Abstract and [0022]). “The disclosed systems and methods provide several improvements over conventional approaches. First, the learning model can perform ICH detection and segmentation tasks simultaneously. This enables information sharing and complementation between the two different but closely related tasks. Modules of the learning model are jointly optimized, which can preserve the overall performance of the two tasks while reducing time consumption in both training and prediction stages...Third, the model can be flexible on the type of ICH classification labels it predicts and supports different training scenarios.” ([0023]). While the main embodiment receives images from head computed tomography (CT) scans ([0005]), GAO’s system can also be applied to other “imaging modalities suitable for head scans, including, e.g., Magnetic Resonance Imaging (MRI)….” ([0024]). GAO also notes that “[b]ased on a bleeding location in a brain, ICH can be further categorized into 5 subtypes: epidural hemorrhage (EDH), subdural hemorrhage (SDH), subarachnoid hemorrhage (SAH), cerebral parenchymal hemorrhage (CPH) and intraventricular hemorrhage (IVH). In some embodiments, ICH subtype labels on slice-level or subject-level may be included in training data.”
“The training images are previously segmented or annotated by expert operators with each pixel/voxel classified and labeled, e.g., with value 1 if the pixel/voxel indicates a bleeding or value 0 if otherwise. In some embodiments, instead of binary values, the ground truth data may be probability maps where each pixel/voxel is associated with a probability value indicating how likely the pixel/voxel indicate a bleeding.” ([0030]). “The trained learning model may be used by image processing device 103 to detect ICH in new head scan images….” ([0034]). GAO uses the trained model “to perform one or more of: (1) predict whether ICH exists, (2) predict the subtype of ICH, and (3) determine segmentation masks of the image optionally with an estimated bleeding volume.” ([0037]). “[T]he segmentation mask can include but not limited to the following examples: 1) binary ICH masks; 2) detailed ICH subtype masks; 3) ICH or subtype masks together with other desired labels….” ([0036]).
[Figure 2 of GAO reproduced here]
Figure 2 of GAO is shown here. “In some embodiments, the decoder module may produce a probability map indicating the probability each pixel in the image slice belongs to a bleeding region. Processor 308 may then perform a thresholding to obtain a segmentation mask. For example, processor 308 may set pixels with probabilities above 0.8 as 1 (i.e., belong to a bleeding region) and the remaining pixels as 0 (i.e., not belong to a bleeding region).” ([0056]).
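NOTE (illustration only): The following is a minimal sketch of the thresholding step quoted above, in which a decoder-produced probability map is converted into a binary segmentation mask (probabilities above 0.8 set to 1, the remainder set to 0). Array shapes and names are illustrative assumptions, not GAO’s implementation.

```python
# Illustration only: binarizing a per-pixel probability map into a segmentation mask.
import numpy as np

def probability_map_to_mask(prob_map: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Set pixels with probability above the threshold to 1 (bleeding region) and the rest to 0."""
    return (prob_map > threshold).astype(np.uint8)

prob_map = np.random.rand(512, 512)        # placeholder decoder output (per-pixel probabilities)
mask = probability_map_to_mask(prob_map)   # 1 = bleeding region, 0 = background
```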
It would have been obvious to one having ordinary skill in the art at the time of filing to modify or replace the learning model of FUJIBAYASHI in order to have an end-to-end multi-task learning model, as taught in GAO, that generates a segmentation map that is based on a set of one or more brain-scan images and has a plurality of pixel-wise class labels comprising an indication of ARIA. One would have been motivated to use the system of GAO because it provides several improvements over conventional approaches, such as performing ICH detection and segmentation tasks simultaneously and being flexible on the type of ICH classification labels it predicts. There would have been a reasonable expectation of success as GAO shows that the system can detect ICH and is flexible enough to also, when trained with MRI images, identify ARIA edema.
NOTE: Examiner is interpreting “indication of ARIA” as being taught by the pixels in GAO that are classified as being part of a bleeding volume. ARIA includes edema and microhemorrhages that are attributed to treatment with anti-amyloid-beta (anti-Aβ) antibodies. ARIA is only determined after monitoring the patient before and during treatment to identify any new microhemorrhages or new/growing sites of edema. (see, e.g., CUMMINGS discussed below). FUJIBAYASHI teaches a system that is capable of monitoring edema and microhemorrhages over time (i.e., capable of detecting ARIA) such that identifying new edema and microhemorrhages means the pixels are indicative of ARIA.
With respect to claim 2, FUJIBAYASHI teaches that the ARIA is associated with microhemorrhages and hemosiderin deposits (ARIA-H) in the brain of the patient. “ARIA includes edema with fluid accumulation (ARIA-E) and small hemorrhages on the brain called cerebral microhemorrhages (ARIA-H)…The specified size of ARIA can be ARIA-H if it is 1 cm or less, and ARIA-E if it is over 1 cm.” (p.4, lines 25-29). “In the detection results shown in FIG. 10, two microbleeds were detected at time t1, eight microbleeds were detected at time t2, and six microbleeds were detected at time t3.” (p.8, lines 18-19) (see Figure 10).
With respect to claim 3, FUJIBAYASHI teaches that the ARIA is associated with parenchymal edema or sulcal effusion (ARIA-E) in the brain of the patient. “ARIA includes edema with fluid accumulation (ARIA-E) and small hemorrhages on the brain called cerebral microhemorrhages (ARIA-H)…The specified size of ARIA can be ARIA-H if it is 1 cm or less, and ARIA-E if it is over 1 cm.” (p.4, lines 25-29). “In the detection results shown in FIG. 9, one edema was detected at time t1, four edemas were detected at time t2, and two edemas were detected at time t3.” (p.7, lines 24-25) (see Figure 9).
With respect to claim 11, FUJIBAYASHI teaches that the set of one or more brain-scan images comprises one or more magnetic resonance imaging (MRI) images, one or more positron emission tomography (PET) images, one or more single-photon emission computed tomography (SPECT) images, one or more amyloid PET images, or any combination thereof. “The image data server 100 records MRI images for each patient.” (p.2, line 32). The images are clearly part of a set. “The example in the figure indicates that the 128th slice image of 256 slice images is displayed.” (p.6, lines 16-17).
With respect to claim 12, FUJIBAYASHI teaches that the set of one or more brain-scan images comprises one or more fluid-attenuated inversion recovery (FLAIR) images, one or more T2*-weighted imaging (T2*WI) images, one or more T1-weighted imaging (T1WI) images, or any combination thereof. “As used herein, MRI images are, for example, T2-weighted images, T2*-weighted images, FLAIR (Fluid-Attenuated Inversion Recovery) images, SWI images, QSM images (quantitative magnetic susceptibility mapping), R2* (R2 star) images, and PADRE (Phase Difference Enhanced Imaging) images.” (p.2, lines 14-16).
With respect to claim 13, FUJIBAYASHI does not explicitly teach the claim limitations. However, in the same field of endeavor, GAO teaches an encoder module 202 and a decoder module 204. Figure 2 illustrates the end-to-end multi-task learning model of GAO. The encoder module 202 is trained to generate a plurality of down-sampled feature maps based on the set of one or more brain-scan images. “As shown in FIG. 2, encoder module 202 may include a sequence of convolution/pooling layers to extract task-relevant features from the image slices, e.g., head CT scan slices.” ([0038]) “In FIG. 2, encoder module 202 employs a VGG architectures as an example to illustrate a feature map extraction procedure. For example, convolutional layers use multiple 3×3 kernel-sized filters and pooling layers use 2×2 size filters.” ([0039]). The ConvRNN module 206 enhances “the quality of feature maps generated from encoder module 202.” ([0040]).
GAO also teaches a decoder that is trained to generate a plurality of up-sampled feature maps based on the plurality of down-sampled feature maps and generate the segmentation map based on the plurality of up-sampled feature maps. See Figure 2. As discussed above, feature maps from the encoder are used by the ConvRNN module. “Consistent with the present disclosure, ConvRNN module 206 may be used to learn contextual information between adjacent image slices across axial axis and enhance the quality of feature maps generated from encoder module 202.” ([0040]). The decoder then receives the feature maps from the ConvRNN module. “Feature maps generated from ConvRNN module 206 are also used by decoder module 204 to produce segmentation masks.” NOTE: The feature maps from ConvRNN are based on the down-sampled feature maps from the encoder.
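NOTE (illustration only): The following is a minimal PyTorch-style sketch of an encoder that produces down-sampled feature maps and a decoder that up-samples them into a per-pixel segmentation map, in the general manner described above (3×3 convolutions and 2×2 pooling). The layer widths, two-class output, and omission of the ConvRNN and skip connections are illustrative assumptions and do not reproduce GAO’s actual model.

```python
# Illustration only: a tiny encoder-decoder segmenter; not GAO's architecture.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Encoder: 3x3 convolutions followed by 2x2 pooling (down-sampled feature maps).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: up-sampling back to the input resolution, then per-pixel class scores.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, num_classes, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.encoder(x)     # down-sampled feature maps
        return self.decoder(features)  # up-sampled per-pixel class scores (segmentation map)

logits = TinySegmenter()(torch.randn(1, 1, 128, 128))   # output shape: (1, 2, 128, 128)
```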
It would have been obvious to one having ordinary skill in the art at the time of filing to use the encoder and decoder modules taught in GAO. One would have been motivated to use the encoder and decoder modules of GAO because the learning model provides several improvements over conventional approaches, such as performing ICH detection and segmentation tasks simultaneously and being flexible on the type of ICH classification labels it predicts. There would have been a reasonable expectation of success as GAO shows that the system can detect ICH and is flexible enough to also, when trained with MRI images, identify ARIA edema.
With respect to claim 14 (depending from claim 13), GAO teaches that the encoder comprises a neural network. “[E]ncoder module 202 may be in any suitable convolutional neural network (CNN) architecture, including but not limited to the CNN component of commonly used image classification architectures such as VGG, ResNet, and DenseNet.” ([0039]) See also claim 7: “wherein the encoder is a Convolutional Neural Network (CNN).”
With respect to claim 16 (depending from claim 13), GAO teaches that the decoder comprises a neural network. “Consistent with some embodiments, the end-to-end multi-task learning model may be a fully convolutional network (FCN) that include an encoder module, a decoder module….” ([0033]). Although not described separately as a neural network, the decoder operates as a neural network as it takes the feature maps and expands them to generate the segmentation map. Moreover, it is known that an encoder-decoder architecture essentially includes two neural networks (i.e., encoder and decoder) connected to one another. (“Encoder-decoder architecture is a fundamental framework used in various fields, including natural language processing, image recognition, and speech synthesis. At its core, this architecture involves two connected neural networks: an encoder and a decoder.” (https://www.larksuite.com/en_us/topics/ai-glossary/encoder-decoder-architecture)).
With respect to claim 19, FUJIBAYASHI teaches wherein the at least one of the plurality of pixel-wise class labels comprises an indication of one or more ARIA lesions. NOTE: Applicant’s disclosure refers to ARIA lesions as “areas of diffuse swelling.” ([0089]). FUJIBAYASHI teaches monitoring the number and size of edema in the brain. (see, e.g., Figure 9). “With the above configuration, the number and size of edema can be automatically monitored quantitatively from MRI images, and the increase and decrease in the number and size of edema can be tracked at multiple time points.” (p.8, lines 4-6). Edema may be color-coded and an alert can be displayed if the edema is excessive. “Although not shown, the previously detected edema and the newly detected edema may be displayed in different display modes (for example, by color coding) so that they can be compared…Also, whether the edema (hyperintense area) is chronic, subacute, or acute may be displayed in a comparable manner. Furthermore, depending on the degree of edema, an alert can be displayed on the image.” (p.7, lines 49-56).
In the FUJIBAYASHI-GAO system, the pixels associated with excessive swelling would be labeled as edema (i.e., an indication of ARIA lesions). (See, e.g., in FUJIBAYASHI: “As shown in FIG. 4, when the signal value of the MRI image is greater than or equal to the first threshold, it can be determined that the set (region) of pixels having the signal value is edema.” (p.4, lines 51-52)). Accordingly, the FUJIBAYASHI-GAO system teaches “wherein the at least one of the plurality of pixel-wise class labels comprises an indication of one or more ARIA lesions.”
With respect to claim 20, FUJIBAYASHI does not teach the claim limitations. However, as explained above with respect to claim 13, GAO teaches the one or more machine-learning models comprises a segmentation model comprising an encoder trained to generate a plurality of down-sampled feature maps based on the set of one or more brain-scan images. “As shown in FIG. 2, encoder module 202 may include a sequence of convolution/pooling layers to extract task-relevant features from the image slices, e.g., head CT scan slices.” ([0038]) “In FIG. 2, encoder module 202 employs a VGG architectures as an example to illustrate a feature map extraction procedure. For example, convolutional layers use multiple 3×3 kernel-sized filters and pooling layers use 2×2 size filters.” ([0039]). The ConvRNN module 206 enhances “the quality of feature maps generated from encoder module 202.” ([0040]).
GAO also teaches detecting ARIA in the brain of the patient by generating, utilizing a classification model associated with the segmentation model, a classification score based at least in part on the plurality of down-sampled feature maps.
Hemorrhage detection in GAO is performed by a classification module (also called classifier) that receives feature maps based at least in part on the plurality of down-sampled feature maps. NOTE: Claim 20 recites “detecting ARIA in the brain of the patient by generating…a classification score…” GAO teaches detecting ICH using a classifier. “The method also includes detecting, by at least one processor, the ICH of the subject using the classifier based on the extracted feature maps of the image slices….” ([0010]). While GAO does not explicitly refer to a “classification score,” GAO teaches generating “ICH detection results” that includes subtypes of ICH and an ICH volume estimation. “Types of ICH detection results that the learning model may provide depend on the types of ground truth ICH labels used in a model training…If only slice-level ICH subtype labels are used in the model training, the model returns ŷ=(ŷsli-b, ŷsli-m, ŷseg, ŷsub-b, ŷsub-m, ŷv), where ŷseg is the predicted segmentation, ŷsli-b is the ICH predictions for all slices, ŷsli-m is the subtype predictions for all slices, ŷsub-b is the subject-level ICH prediction, ŷsub-m is the subject-level subtype prediction, and ŷv is the ICH volume estimation.” Accordingly, GAO teaches “detecting ARIA in the brain of the patient by generating…a classification score…”
Furthermore, the classification module receives feature maps that are based at least in part on the plurality of down-sampled feature maps: the encoder’s feature maps are fed to the ConvRNN module, which in turn provides feature maps to the classification module. “For example, processor 308 may execute the encoder module to extract feature maps [i.e., down-sampled feature maps] from the received image. The feature maps may then be fed to the ConvRNN module and the attention module in parallel. The ConvRNN module encodes contextual information into the feature maps, which is provided to the slice-level classification module.” ([0055]). Accordingly, GAO teaches that the “classification score [is] based at least in part on the plurality of down-sampled feature maps.”
GAO also teaches that the classification module is associated with the segmentation model. “Consistent with some embodiments, processor 308 may apply the learning model to the image to perform the slice-level classification (e.g., using the slice-level classification module) and segmentation (e.g., using the decoder module) in parallel.” ([0055]). Accordingly, GAO teaches “utilizing a classification model associated with the segmentation model.”
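NOTE (illustration only): The following is a minimal PyTorch-style sketch of a classification head that operates on the encoder’s down-sampled feature maps, in parallel with a segmentation decoder, to produce a per-image classification score. The pooling choice, layer sizes, and single output score are illustrative assumptions and do not reproduce GAO’s classification module.

```python
# Illustration only: a classification head over down-sampled feature maps; not GAO's module.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, feature_channels: int = 32, num_labels: int = 1):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # collapse the spatial dimensions
        self.fc = nn.Linear(feature_channels, num_labels)

    def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
        pooled = self.pool(feature_maps).flatten(1)       # (batch, channels)
        return torch.sigmoid(self.fc(pooled))             # classification score in [0, 1]

feature_maps = torch.randn(1, 32, 32, 32)                 # e.g., encoder output from the sketch above
score = ClassificationHead()(feature_maps)                # per-image score that the abnormality is present
```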
It would have been obvious to one having ordinary skill in the art at the time of filing to configure the learning model to have, as taught by GAO, an encoder, decoder, and classifier that are configured as recited in claim 20. One would have been motivated to use the encoder module, decoder module, and classification module of GAO because GAO’s learning model provides several improvements over conventional approaches, such as performing ICH detection and segmentation tasks simultaneously and being flexible on the type of ICH classification labels it predicts. There would have been a reasonable expectation of success as GAO shows that the system can detect ICH and is flexible enough to also, when trained with MRI images, identify ARIA edema.
With respect to claim 21, FUJIBAYASHI teaches the identical access, input, and output steps as described above with respect to claim 1. FUJIBAYASHI also teaches a system including one or more computing devices (“The information processing device 50 can be configured by a computer...” (p.3, line 5)), comprising: one or more non-transitory computer-readable storage media including instructions (“The storage unit 59 stores a computer program 60…” (p.3, line 9)); and one or more processors coupled to the one or more storage media, the one or more processors configured to execute the instructions (“The control unit 51 can execute processing defined by the computer program 60.” (p.3, lines 17-18)).
With respect to claim 41, FUJIBAYASHI teaches the identical access, input, and output steps as described above with respect to claim 1. FUJIBAYASHI also teaches a non-transitory computer-readable medium comprising instructions (“The storage unit 59 stores a computer program 60…” (p.3, line 9)) that, when executed by one or more processors of one or more computing devices, cause the one or more processors to perform the above steps. (“The control unit 51 can execute processing defined by the computer program 60.” (p.3, lines 17-18)).
Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over a translation of Int’l. Publ. No. WO 2023/145953 A1 (hereinafter “FUJIBAYASHI”) and U.S. Patent Appl. Publ. No. 2020/0349697 A1 (hereinafter “GAO”) as applied to claim 1 above, and further in view of Cummings, Jeffrey, et al. “Aducanumab: appropriate use recommendations.” The journal of prevention of Alzheimer's disease 8.4 (2021): 398-410 (hereinafter “CUMMINGS”).
With respect to claim 4, FUJIBAYASHI does not explicitly teach that the patient is an Alzheimer's disease (AD) patient having been treated with an anti-amyloid-beta (anti-Aβ) antibody. However, FUJIBAYASHI teaches that the method could be used to “understand the progression of lesions and administer appropriate medication under pathological management that monitors changes over time…” (p.8, lines 43-45), and ARIA is one risk for Alzheimer’s patients.
CUMMINGS teaches appropriate use recommendations for aducanumab, which “has been approved by the US Food and Drug Administration for treatment of Alzheimer’s disease (AD).” (Abstract). “Aducanumab is an amyloid-targeting monoclonal antibody delivered by monthly intravenous infusions. The pivotal trials included patients with early AD (mild cognitive impairment due to AD and mild AD dementia) who had confirmed brain amyloid using amyloid positron tomography.” (Abstract). However, “[a]ducanumab can substantially increase the incidence of amyloid-related imaging abnormalities (ARIA) with brain effusion or hemorrhage.” (emphasis added) (Abstract). As such, part of treating patients with aducanumab includes monitoring with MRI imaging. “The Expert Panel recommends MRIs prior to initiating therapy, during the titration of the drug, and at any time the patient has symptoms suggestive of ARIA.” (Abstract).
It would have been obvious to use the FUJIBAYASHI-GAO system to monitor an Alzheimer's disease (AD) patient having been treated with an anti-amyloid-beta (anti-Aβ) antibody. ARIA is a risk for Alzheimer’s patients, and the FUJIBAYASHI-GAO system is designed to identify edema and microbleeds and monitor a patient over time. One having ordinary skill in the art would have been motivated to use the FUJIBAYASHI-GAO system for its intended purpose while monitoring a patient that is receiving an anti-amyloid-beta (anti-Aβ) antibody, such as aducanumab.
With respect to claim 5 (depending from claim 4), FUJIBAYASHI does not explicitly teach, in response to outputting the quantification of ARIA in the brain of the patient, determining a dosage adjustment of the anti-Aβ antibody. However, FUJIBAYASHI does teach enabling the user to see a “medication history” at different time points, (p.7, lines 20-21), and “to understand the progression of lesions and administer appropriate medication under pathological management….” (p.8, lines 43-45).
Nonetheless, Figure 1 of CUMMINGS illustrates the monitoring schedule that should be followed while administering aducanumab to a patient who has met the enrollment criteria to receive aducanumab. An MRI is performed prior to increasing the titration from T4 to T5. If ARIA is discovered, treatment can be suspended. If ARIA is not discovered (i.e., an output with a quantification that suggests no ARIA), the dosage increases from 3 mg/kg to 6 mg/kg (i.e., a dosage adjustment).
With respect to claim 6 (depending from claim 4), FUJIBAYASHI does not explicitly teach, in response to outputting the quantification of ARIA in the brain of the patient, terminating or temporarily suspending use of the anti-Aβ antibody in the patient. However, FUJIBAYASHI does teach enabling the user to see a “medication history” at different time points, (p.7, lines 20-21), and “to understand the progression of lesions and administer appropriate medication under pathological management….” (p.8, lines 43-45).
CUMMINGS teaches that “[d]ose interruption or treatment discontinuation is recommended for symptomatic ARIA and for moderate-severe ARIA.” (Abstract). “ARIA led to discontinuation from the trials in 6.2% of patients on aducanumab and 0.6% of patients on placebo.” (p.404, top of left column).
It would have been obvious to one having ordinary skill in the art to terminate or temporarily suspend use of the aducanumab if the output from the FUJIBAYASHI-GAO system suggests that the ARIA is not being managed. Edema and/or microhemorrhages can risk the life and well-being of the patient. If the output from the FUJIBAYASHI-GAO system provides evidence that edema and/or microhemorrhages are increasing while on aducanumab, the obvious response would be to terminate or temporarily suspend treatment.
With respect to claim 7 (depending from claim 4), FUJIBAYASHI does not explicitly teach wherein the anti-Aβ antibody is selected from the group consisting of bapineuzumab, solanezumab, aducanumab, gantenerumab, crenezumab, donanembab, and lecanemab.
CUMMINGS teaches appropriate use recommendations for aducanumab, which “has been approved by the US Food and Drug Administration for treatment of Alzheimer’s disease (AD).” (Abstract). “Aducanumab is an amyloid-targeting monoclonal antibody delivered by monthly intravenous infusions.” (Abstract). It would have been obvious to one having ordinary skill in the art to use aducanumab while monitoring a patient as it has been approved for treatment of Alzheimer’s disease.
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over a translation of Int’l. Publ. No. WO 2023/145953 A1 (hereinafter “FUJIBAYASHI”) and U.S. Patent Appl. Publ. No. 2020/0349697 A1 (hereinafter “GAO”) as applied to claim 1 above, and further in view of Withington, Charles G., and R. Scott Turner. “Amyloid-related imaging abnormalities with anti-amyloid antibodies for the treatment of dementia due to Alzheimer's disease.” Frontiers in Neurology 13 (2022): 862369 (hereinafter “WITHINGTON”).
NOTE: As discussed above with respect to the Section 112(a) and (b) rejections, Applicant’s disclosure does not enable determining or administering anti-ARIA treatments or anti-ARIA antibodies, nor is it clear what is meant by these terms. In the interest of compact prosecution, claims 8 and 9 have been examined and rejected in light of WITHINGTON. If any anti-ARIA antibodies are known, it would have been obvious to consider and, if appropriate, administer the anti-ARIA antibodies.
With respect to claim 8 (and in light of the Section 112(a) and (b) rejections above), FUJIBAYASHI does not explicitly teach, in response to outputting the quantification of ARIA in the brain of the patient, determining one or more anti-ARIA treatments for the patient.
However, WITHINGTON teaches that “[c]ontrolled studies regarding prevention and treatment of ARIA are lacking, but anecdotal evidence suggests that a pulse of intravenous corticosteroids may be of benefit, as well as a course of anticonvulsant for seizures.” (Abstract). “If symptomatic ARIA-E is presumably associated with excessive neuroinflammation, a pulse of brain-penetrant intravenous corticosteroids (dexamethasone or methylprednisolone) may be effective in minimizing severity and/or duration of symptoms. Plasmapheresis has also been attempted to manage severe ARIA-E.” (p.6, left column before Conclusion).
It would have been obvious to one having ordinary skill in the art at the time of filing to suspend or discontinue treatment with the antibodies, in response to MRI imaging suggesting ARIA-E, and to consider treatments that address neuroinflammation, such as a pulse of brain-penetrant intravenous corticosteroids, as taught in WITHINGTON. One would have been motivated to consider and determine treatments as a duty of care to the patient. Assuming no other reason not to use corticosteroids, there would have been a reasonable expectation of being able to determine and administer the anti-ARIA treatment.
With respect to claim 9 (depending from claim 8) (and in light of the Section 112(a) and (b) rejections above), FUJIBAYASHI does not teach administering the one or more anti-ARIA treatments to the patient.
However, WITHINGTON teaches that “[c]ontrolled studies regarding prevention and treatment of ARIA are lacking, but anecdotal evidence suggests that a pulse of intravenous corticosteroids may be of benefit, as well as a course of anticonvulsant for seizures.” (Abstract). “If symptomatic ARIA-E is presumably associated with excessive neuroinflammation, a pulse of brain-penetrant intravenous corticosteroids (dexamethasone or methylprednisolone) may be effective in minimizing severity and/or duration of symptoms. Plasmapheresis has also been attempted to manage severe ARIA-E.” (p.6, left column before Conclusion).
It would have been obvious to one having ordinary skill in the art at the time of filing to suspend or discontinue treatment with the antibodies, in response to MRI imaging suggesting ARIA-E, and to administer a treatment that addresses neuroinflammation, such as a pulse of brain-penetrant intravenous corticosteroids, as taught in WITHINGTON. One would have been motivated to administer the treatment as a duty of care to the patient. Assuming no other reason not to use corticosteroids, there would have been a reasonable expectation of being able to administer the anti-ARIA treatment.
With respect to claim 10 (depending from claim 8) (and in light of the Section 112(a) and (b) rejections above), FUJIBAYASHI does not teach wherein the one or more anti-ARIA treatments comprise one or more anti-ARIA antibodies.
As discussed above with respect to the Section 112(a) and (b) rejections, it is not clear what is meant by anti-ARIA treatments or antibodies or how one would administer them to a patient. However, it would have been obvious to one having ordinary skill in the art to consider using any available known treatments, including antibodies, if the edema and microhemorrhaging were excessive or otherwise called for such treatment. One would have been motivated to treat edema and/or hemorrhaging attributed to ARIA due to one’s duty of care.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over a translation of Int’l. Publ. No. WO 2023/145953 A1 (hereinafter “FUJIBAYASHI”) and U.S. Patent Appl. Publ. No. 2020/0349697 A1 (hereinafter “GAO”) as applied to claim 1 above, and further in view of U.S. Patent Appl. Publ. No. 2024/0420333 A1 (hereinafter “DADAR”).
With respect to claim 18, FUJIBAYASHI does not explicitly teach wherein the one or more machine-learning models is trained using image augmentations.
In the same field of endeavor, DADAR is directed to a classifier that is trained to recognize microbleed voxels within a brain image. (Abstract). While training a learning model for segmentation tasks, DADAR teaches: “Applicant further augmented the microbleed patch dataset by randomly rotating the patches to generate additional training data. The random rotations may be performed on the full slice (not the patches) centering around the microbleed voxel; therefore, the corner voxels in the patches include information from different areas not present in other patches. Matching numbers of novel background patches may also be added to balance the training dataset. The performance of the model may be assessed using the training dataset with no augmentation, and with adding 4, 9, 14, 19, 24, and 29 random rotations to the training set, respectively.” (emphasis added) ([0077]). The data augmentation proved effective. “FIG. 10 shows the average performance of the model with these parameters trained with no augmentation, as well as the same model trained on original data plus data augmented with 4, 9, 14, 19, 24, and 29 random rotations (no augmentation was performed on validation and test sets). All models with data augmentation performed better than the model without any data augmentation.” (emphasis added) ([0112]).
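NOTE (illustration only): The following is a minimal sketch of rotation-based data augmentation of the general kind described above (randomly rotating slices to generate additional training data). The patch size, angle range, interpolation settings, and number of added rotations are illustrative assumptions and are not DADAR’s parameters.

```python
# Illustration only: augmenting a training slice with random rotations.
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(slice_2d: np.ndarray, n_rotations: int = 4, seed: int = 0) -> list[np.ndarray]:
    """Return the original slice plus n randomly rotated copies."""
    rng = np.random.default_rng(seed)
    augmented = [slice_2d]
    for _ in range(n_rotations):
        angle = rng.uniform(0.0, 360.0)
        augmented.append(rotate(slice_2d, angle, reshape=False, order=1, mode="nearest"))
    return augmented

training_slices = augment_with_rotations(np.random.rand(64, 64))   # 1 original + 4 rotated copies
```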
It would have been obvious to one having ordinary skill in the art at the time of filing to modify the FUJIBAYASHI-GAO learning model by training the learning model with data augmentation, as taught in DADAR. One of ordinary skill in the art would have used data augmentation because, as taught in DADAR, models with data augmentation perform better than models without data augmentation. There would have been a reasonable expectation of success as DADAR teaches that data augmentation can be used with learning models configured for segmentation tasks.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON P GROSS whose telephone number is (571)272-1386. The examiner can normally be reached Monday-Friday 9:00-5:00CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M. Kozak can be reached at (571) 270-5284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON P GROSS/ Examiner, Art Unit 3797 /SERKAN AKAR/ Primary Examiner, Art Unit 3797