DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-6, 10, 14-15, 17-18, 20-24, 26-29, & 57 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-2, 4-6, 10, 14-15, 17-18, 20-24, 26-29, & 57 recite a system comprising one or more computing devices configured to: access a set of one or more brain-scan images associated with the patient; input the set of one or more brain-scan images into one or more machine-learning models trained to generate a segmentation map based on the set of one or more brain-scan images, the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map, and generate a classification score based on the segmentation map; and detect ARIA in the brain of the patient based on the classification score. As drafted, the claim, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components.
That is, other than reciting “one or more computing devices” and “one or more processors”, nothing in the claim precludes the steps from practically being performed in the mind. For example, “accessing a set of one or more brain-scan images associated with the patient” in the context of this claim encompasses a generic data-gathering step that can be performed by a user. The user could also manually “generate a segmentation map based on the set of one or more brain-scan images, the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map” by using pen and paper to segment relevant portions of the brain scan, as well as “generate a classification score based on the segmentation map; and detecting ARIA in the brain of the patient based on the classification score” by using the user's expertise in determining ARIA from brain scans.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The judicial exception is not integrated into a practical application. In particular, the claim only recites two additional elements – one or more computing devices and one or more processors to perform the above noted steps. The one or more computing devices and one or more processors are recited at a high level of generality (i.e., as a generic processing system conducting generic image processing functions) such that they amount to no more than mere instructions to apply the exception using a processor. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the determining steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
The claims are not patent eligible.
As for the dependent claims, they are also rejected under 35 U.S.C. 101 for at least the similar reasons noted above, as they are directed to abstract ideas and do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, 6, 21-24, 28-29, & 57 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dou et al (Q. Dou et al., “Automatic detection of cerebral microbleeds from MR images via 3D Convolutional Neural Networks,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1182–1195, May 2016; hereinafter referred to as Dou).
Regarding Claim 1, Dou discloses a method for detecting amyloid related imaging abnormalities (ARIA) in a brain of a patient (“a novel automatic method to detect Cerebral microbleeds (CMBs) from magnetic resonance (MR) images by exploiting the 3D convolutional neural network (CNN). “ [Abstract], CMBs are known in the art to be ARIA), comprising, by one or more computing devices:
accessing a set of one or more brain-scan images associated with the patient (“In order to accurately and efficiently detect CMBs from volumetric brain susceptibility-weighted imaging (SWI) data, we propose a robust and efficient method by leveraging 3D CNNs.” [Introduction]);
inputting the set of one or more brain-scan images into one or more machine-learning models trained to (“We, for the first time, exploit the 3D CNN for automatic detection of CMBs from volumetric brain SWI images. The 3D CNN sufficiently encodes the spatial contextual information and hierarchically extracts high-level features in a data driven way.” [Introduction]):
generate a segmentation map based on the set of one or more brain-scan images (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.“ [Methodology]),
the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.” [Methodology]);
and generate a classification score based on the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume. Subsequently, in the discrimination stage, we further remove false positive candidates by applying a 3D CNN discrimination model to distinguish true CMBs from challenging mimics with high-level feature representations.“ [Methodology]);
and detecting ARIA in the brain of the patient based on the classification score (“The screening stage with the 3D FCN aims to accurately reject the background regions and rapidly retrieve a small number of potential candidates. The discrimination stage with the 3D CNN focuses only on the screened set of candidates to further single out the true CMBs from challenging mimics.” [Two-Stage Cascaded Framework]).
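For illustration only, the two-stage cascade that Dou describes (a screening stage producing a per-voxel probability volume, followed by a discrimination stage that rejects false positives before a final detection decision) can be sketched as follows; the sigmoid scorer, thresholds, and function names are the examiner's assumptions for exposition, not Dou's actual implementation.

```python
# Illustrative sketch of a screening/discrimination cascade over a 3D volume.
# The toy scoring functions and thresholds are assumptions, not Dou's models.
import numpy as np

def screening_stage(volume, threshold=0.5):
    """Stand-in for the 3D FCN: map the input volume to a per-voxel
    probability ("score") volume, then keep coordinates above threshold."""
    score_volume = 1.0 / (1.0 + np.exp(-volume))  # sigmoid as a toy scorer
    candidates = np.argwhere(score_volume > threshold)
    return score_volume, candidates

def discrimination_stage(score_volume, candidates, threshold=0.8):
    """Stand-in for the 3D CNN: re-score each candidate and keep only
    high-confidence detections, rejecting false-positive mimics."""
    return [tuple(c) for c in candidates if score_volume[tuple(c)] > threshold]

def detect(volume):
    """Return a binary classification score: 1 if any lesion candidate
    survives both stages, else 0, together with the surviving candidates."""
    score_volume, candidates = screening_stage(volume)
    detections = discrimination_stage(score_volume, candidates)
    return (1 if detections else 0), detections
```

In this sketch the screening output plays the role of the claimed segmentation map (per-voxel class evidence), and the survival of any candidate through discrimination plays the role of the claimed classification score.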
Regarding Claim 2, Dou discloses the ARIA is associated with microhemorrhages and hemosiderin deposits (ARIA-H) or parenchymal edema or sulcal effusion (ARIA-E) in the brain of the patient (“a novel automatic method to detect Cerebral microbleeds (CMBs) from magnetic resonance (MR) images by exploiting the 3D convolutional neural network (CNN). “ [Abstract], CMBs are known in the art to be ARIA-H).
Regarding Claim 4, Dou discloses the one or more machine-learning models comprises a segmentation model (3D FCN) and a classification model (3D CNN) (“We construct a 3D FCN model and a 3D CNN model tailored for two different stages and integrate them into an efficient and robust detection framework. In this cascaded framework for CMB detection, each stage serves its own mission. The screening stage with the 3D FCN aims to accurately reject the background regions and rapidly retrieve a small number of potential candidates. The discrimination stage with the 3D CNN focuses only on the screened set of candidates to further single out the true CMBs from challenging mimics.” [Two-Stage Cascaded Framework]).
Regarding Claim 6, Dou discloses the segmentation model further comprises a bidirectional feature propagation network (“The 3D convolution kernels are randomly initialized from the Gaussian distribution and trainable parameters in the network are tuned using the standard back-propagation with stochastic gradient descent by minimizing the cross entropy loss. Meanwhile, dropout strategy [36] is utilized to reduce the co-adaption of intermediate features and improve the generalization capability.” [Methodology], “The proposed 3D FCN can take an arbitrary-sized volume as input and produce a 3D score volume within a single forward propagation, and hence greatly speed up the candidate retrieval procedure without damaging the sensitivity.” [Methodology]).
Regarding Claim 21, Dou discloses the set of one or more brain-scan images comprises one or more magnetic resonance imaging (MRI) images, one or more positron emission tomography (PET) images, one or more single-photon emission computed tomography (SPECT) images, one or more amyloid PET images, or any combination thereof (“In order to accurately and efficiently detect CMBs from volumetric brain susceptibility-weighted imaging (SWI) data, we propose a robust and efficient method by leveraging 3D CNNs.” [Introduction]).
Regarding Claim 22, Dou discloses the set of one or more brain-scan images comprises one or more fluid-attenuated inversion recovery (FLAIR) images, one or more T2*-weighted imaging (T2*WI) images, one or more T1-weighted imaging (T1WI) images, or any combination thereof (“In order to accurately and efficiently detect CMBs from volumetric brain susceptibility-weighted imaging (SWI) data, we propose a robust and efficient method by leveraging 3D CNNs.” [Introduction], SWIs are known in the art as being a form of T2*-weighted imaging).
Regarding Claim 23, Dou discloses the set of one or more brain-scan images comprises a plurality of volumes corresponding to one or more cross-sectional volumes of the brain of the patient (“Learning feature representations from all three dimensions is vitally important for biomarker detection tasks from volumetric medical data, e.g., CMB detection from SWI images. In this regard, we propose to employ the 3D convolution kernel, in the pursuance of encoding richer spatial information of the volumetric data. In this case, the feature maps are 3D blocks instead of 2D patches (we call them feature volumes hereafter). As shown in Fig. 3(b), given the same volumetric image of size X×Y×Z, when we employ a 3D convolution kernel to generate a 3D feature volume, the input to the network is the entire volumetric data. Consequently, a 3D kernel is formed and it sweeps over the whole 3D topology (see the red line). By leveraging the kernel sharing across all three dimensions, the network can take full advantage of the volumetric contextual information.” [Methodology]).
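For illustration only, the 3D convolution operation described in the passage cited for Claim 23 (a single kernel sweeping the whole X×Y×Z volume with its weights shared across all three dimensions, producing a 3D feature volume rather than independent 2D patches) can be sketched as below; the naive loops and "valid" padding are the examiner's assumptions for exposition.

```python
# Minimal sketch of a valid-mode 3D convolution: the same kernel weights
# are applied at every 3D position, so the output is itself a 3D volume.
import numpy as np

def conv3d(volume, kernel):
    """Naively convolve one 3D volume with one 3D kernel (no padding)."""
    X, Y, Z = volume.shape
    kx, ky, kz = kernel.shape
    out = np.zeros((X - kx + 1, Y - ky + 1, Z - kz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # kernel sharing across all three dimensions
                out[i, j, k] = np.sum(volume[i:i+kx, j:j+ky, k:k+kz] * kernel)
    return out
```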
Regarding Claim 24, Dou discloses the classification score comprises a binary value indicative of an absence of ARIA or a presence of ARIA or a numerical value indicative of a severity of ARIA (“The screening stage with the 3D FCN aims to accurately reject the background regions and rapidly retrieve a small number of potential candidates. The discrimination stage with the 3D CNN focuses only on the screened set of candidates to further single out the true CMBs from challenging mimics.” [Two-Stage Cascaded Framework]).
Regarding Claim 28, Dou discloses at least one of the plurality of pixel-wise class labels comprises an indication of one or more ARIA lesions, and wherein the one or more machine-learning models is further trained to generate the classification score based on the at least one of the plurality of pixel-wise class labels (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume. Subsequently, in the discrimination stage, we further remove false positive candidates by applying a 3D CNN discrimination model to distinguish true CMBs from challenging mimics with high-level feature representations.“ [Methodology]).
Regarding Claim 29, Dou discloses a system including one or more computing devices, comprising: one or more non-transitory computer-readable storage media including instructions; and one or more processors coupled to the one or more non-transitory computer-readable storage media, the one or more processors configured to execute the instructions to (“a novel automatic method to detect Cerebral microbleeds (CMBs) from magnetic resonance (MR) images by exploiting the 3D convolutional neural network (CNN). “ [Abstract], “We implemented the proposed framework based on Theano1 library using dual Intel Xeon(R) processors E5–2650 2.6 GHz and a GPU of NVIDIA GeForce GTX TITAN Z. “ [System Implementation]):
access a set of one or more brain-scan images associated with the patient (“In order to accurately and efficiently detect CMBs from volumetric brain susceptibility-weighted imaging (SWI) data, we propose a robust and efficient method by leveraging 3D CNNs.” [Introduction]);
input the set of one or more brain-scan images into one or more machine-learning models trained to (“We, for the first time, exploit the 3D CNN for automatic detection of CMBs from volumetric brain SWI images. The 3D CNN sufficiently encodes the spatial contextual information and hierarchically extracts high-level features in a data driven way.” [Introduction]):
generate a segmentation map based on the set of one or more brain-scan images (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.“ [Methodology]),
the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.” [Methodology]);
and generate a classification score based on the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume. Subsequently, in the discrimination stage, we further remove false positive candidates by applying a 3D CNN discrimination model to distinguish true CMBs from challenging mimics with high-level feature representations.“ [Methodology]);
and detect ARIA in the brain of the patient based on the classification score (“The screening stage with the 3D FCN aims to accurately reject the background regions and rapidly retrieve a small number of potential candidates. The discrimination stage with the 3D CNN focuses only on the screened set of candidates to further single out the true CMBs from challenging mimics.” [Two-Stage Cascaded Framework]).
Regarding Claim 57, Dou discloses a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors of one or more computing devices, cause the one or more processors to (“a novel automatic method to detect Cerebral microbleeds (CMBs) from magnetic resonance (MR) images by exploiting the 3D convolutional neural network (CNN). “ [Abstract], “We implemented the proposed framework based on Theano1 library using dual Intel Xeon(R) processors E5–2650 2.6 GHz and a GPU of NVIDIA GeForce GTX TITAN Z. “ [System Implementation]):
access a set of one or more brain-scan images associated with the patient (“In order to accurately and efficiently detect CMBs from volumetric brain susceptibility-weighted imaging (SWI) data, we propose a robust and efficient method by leveraging 3D CNNs.” [Introduction]);
input the set of one or more brain-scan images into one or more machine-learning models trained to (“We, for the first time, exploit the 3D CNN for automatic detection of CMBs from volumetric brain SWI images. The 3D CNN sufficiently encodes the spatial contextual information and hierarchically extracts high-level features in a data driven way.” [Introduction]):
generate a segmentation map based on the set of one or more brain-scan images (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.“ [Methodology]),
the segmentation map including a plurality of pixel-wise class labels corresponding to a plurality of pixels in the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume.” [Methodology]);
and generate a classification score based on the segmentation map (“Fig. 2 shows an overview of the proposed cascaded framework, which is composed of two stages: screening stage and discrimination stage. In the screening stage, the 3D FCN model takes a whole volumetric data as input and directly outputs a 3D score volume. Each value on the 3D score volume represents the probability of CMB at a corresponding voxel of the input volume. Subsequently, in the discrimination stage, we further remove false positive candidates by applying a 3D CNN discrimination model to distinguish true CMBs from challenging mimics with high-level feature representations.“ [Methodology]);
and detect ARIA in the brain of the patient based on the classification score (“The screening stage with the 3D FCN aims to accurately reject the background regions and rapidly retrieve a small number of potential candidates. The discrimination stage with the 3D CNN focuses only on the screened set of candidates to further single out the true CMBs from challenging mimics.” [Two-Stage Cascaded Framework]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5 & 10 are rejected under 35 U.S.C. 103 as being unpatentable over Dou in view of Long et al (CN 115409782 A; hereinafter referred to as Long).
Regarding Claim 5, Dou discloses all limitations noted above except that the segmentation model comprises an encoder trained to generate a plurality of down-sampled feature maps based on the set of one or more brain-scan images, and wherein the classification model comprises a decoder trained to receive the plurality of down-sampled feature maps from the encoder.
However, in a similar field of endeavor, Long teaches a brain MRI tissue analysis method [Technical Field].
Long also teaches that the segmentation model comprises an encoder trained to generate a plurality of down-sampled feature maps based on the set of one or more brain- scan images, and wherein the classification model comprises a decoder trained to receive the plurality of down-sampled feature maps from the encoder (“inputting each therapeutic brain MRI image feature into the encoder for feature extraction, outputting the coding MRI image feature corresponding to each coding layer, the encoder comprises a plurality of coding layers, each coding layer comprises a down-sampling module and an attention structure module… based on the decoder and the skip connection module, all the coding MRI image feature corresponding to the decoding layer to feature connection, outputting the target image feature corresponding to each decoding layer, the decoder comprises a plurality of decoding layers, each decoding layer comprises an upper sampling module and an attention structure module” [Contents of the invention]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the segmentation model comprises an encoder trained to generate a plurality of down-sampled feature maps based on the set of one or more brain-scan images, and the classification model comprises a decoder trained to receive the plurality of down-sampled feature maps from the encoder, as taught by Long, because doing so is helpful for improving the accuracy of obtaining the target image characteristic [Contents of the invention].
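For illustration only, the encoder/decoder arrangement quoted from Long (an encoder of stacked down-sampling layers producing feature maps, and a decoder that up-samples while fusing the matching encoder feature maps through skip connections) can be sketched as below; the pooling choices, depth, and function names are the examiner's assumptions for exposition, not Long's actual network.

```python
# Schematic encoder/decoder with skip connections over a 2D feature map.
# Average pooling and nearest-neighbour up-sampling stand in for the
# learned down-sampling and up-sampling modules described by Long.
import numpy as np

def downsample(x):
    """2x average pooling: one encoder down-sampling module (even dims assumed)."""
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    """2x nearest-neighbour up-sampling: one decoder up-sampling module."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder(image, depth=2):
    """Return the list of progressively down-sampled feature maps, finest first."""
    features = [image]
    for _ in range(depth):
        features.append(downsample(features[-1]))
    return features

def decoder(features):
    """Walk back up in resolution, fusing each up-sampled map with the
    skip connection from the encoder map of the same resolution."""
    x = features[-1]
    for skip in reversed(features[:-1]):
        x = upsample(x) + skip  # skip connection: element-wise fusion
    return x
```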
Regarding Claim 10, Dou discloses all limitations noted above except that the classification model comprises an attention mechanism.
However, Long teaches that the classification model comprises an attention mechanism (“based on the decoder and the skip connection module, all the coding MRI image feature corresponding to the decoding layer to feature connection, outputting the target image feature corresponding to each decoding layer, the decoder comprises a plurality of decoding layers, each decoding layer comprises an upper sampling module and an attention structure module” [Contents of the invention]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the classification model comprises an attention mechanism as taught by Long, because doing so is helpful for improving the accuracy of obtaining the target image characteristic [Contents of the invention].
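For illustration only, an attention structure of the kind Long attaches to each decoding layer can be sketched as a softmax re-weighting of spatial positions in a feature map, so that salient regions contribute more to the pooled output; this is the examiner's own generic sketch, not Long's exact module.

```python
# Generic spatial attention pooling over a (H, W, C) feature map:
# per-position scores are softmax-normalized into weights that re-weight
# the features before summing them into a single attended feature vector.
import numpy as np

def softmax(x):
    """Numerically stable softmax over a flat array."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def attention_pool(features, scores):
    """Weight each spatial position of `features` (H, W, C) by its
    attention score (H, W) and return the weighted-sum feature vector."""
    weights = softmax(scores.ravel()).reshape(scores.shape)  # sums to 1
    return np.sum(features * weights[..., None], axis=(0, 1))
```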
Claims 14, 15, 17, 18, & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dou in view of Boots et al (US 20220281963 A1; hereinafter referred to as Boots).
Regarding Claim 14, Dou discloses all limitations noted above except that the patient is an Alzheimer's disease (AD) patient having been treated with an anti-amyloid-beta (anti-Aβ) antibody.
However, in a similar field of endeavor, Boots teaches methods for treating Alzheimer's disease [Abstract].
Boots also teaches that the patient is an Alzheimer's disease (AD) patient having been treated with an anti-amyloid-beta (anti-Aβ) antibody (“Provided are methods for treating Alzheimer's disease in a human subject in need thereof when the subject develops an Amyloid Related Imaging Abnormality (ARIA) during a treatment regimen comprising administration of multiple doses of an anti-beta-amyloid antibody (e.g., BIIB037) to the subject.” [Abstract]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the patient is an Alzheimer's disease (AD) patient having been treated with an anti-amyloid-beta (anti-Aβ) antibody, as taught by Boots, because there is a need in the art for methods to reduce the incidence of ARIA in susceptible Alzheimer's disease patients during AD treatment protocols [0017].
Regarding Claim 15, Dou discloses all limitations noted above except further comprising: in response to detecting the ARIA in the brain of the patient, determining a dosage adjustment of the anti-Aβ antibody, terminating use of the anti-Aβ antibody, or temporarily suspending the use of the anti-Aβ antibody.
However, in a similar field of endeavor, Boots teaches, in response to detecting the ARIA in the brain of the patient, determining a dosage adjustment of the anti-Aβ antibody, terminating use of the anti-Aβ antibody, or temporarily suspending the use of the anti-Aβ antibody (“After the onset of ARIA in the subject, administration of the anti-beta-amyloid antibody to the subject is suspended until the ARIA resolves (and if there are clinical symptoms, until they also resolve). The method further involves resuming administration to the subject of the same dose of the anti-beta-amyloid antibody that was administered immediately prior to the subject developing the ARIA.” [0019]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that, in response to detecting the ARIA in the brain of the patient, a dosage adjustment of the anti-Aβ antibody is determined, use of the anti-Aβ antibody is terminated, or use of the anti-Aβ antibody is temporarily suspended, as taught by Boots, because there is a need in the art for methods to reduce the incidence of ARIA in susceptible Alzheimer's disease patients during AD treatment protocols [0017].
Regarding Claim 17, Dou discloses all limitations noted above except the anti-Aβ antibody is selected from the group consisting of bapineuzumab, solanezumab, aducanumab, gantenerumab, crenezumab, donanemab, and lecanemab.
However, in a similar field of endeavor, Boots teaches the anti-Aβ antibody is selected from the group consisting of bapineuzumab, solanezumab, aducanumab, gantenerumab, crenezumab, donanemab, and lecanemab (“The method involves administering to the human subject (wherein the subject is an ApoE4 carrier or ApoE4 non-carrier), multiple doses of an anti-beta-amyloid antibody (e.g., aducanumab).” [0065]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the anti-Aβ antibody is selected from the group consisting of bapineuzumab, solanezumab, aducanumab, gantenerumab, crenezumab, donanemab, and lecanemab, as taught by Boots, because there is a need in the art for methods to reduce the incidence of ARIA in susceptible Alzheimer's disease patients during AD treatment protocols [0017].
Regarding Claim 18, Dou discloses all limitations noted above except further comprising: in response to detecting the ARIA in the brain of the patient, determining one or more anti-ARIA treatments for the patient and administering the one or more anti-ARIA treatments to the patient.
However, in a similar field of endeavor, Boots teaches further comprising: in response to detecting the ARIA in the brain of the patient, determining one or more anti-ARIA treatments for the patient and administering the one or more anti-ARIA treatments to the patient (“After the onset of ARIA in the subject, administration of the anti-beta-amyloid antibody to the subject is suspended until the ARIA resolves (and if there are clinical symptoms, until they also resolve). The method further involves resuming administration to the subject of the same dose of the anti-beta-amyloid antibody that was administered immediately prior to the subject developing the ARIA.” [0019]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that, in response to detecting the ARIA in the brain of the patient, one or more anti-ARIA treatments are determined for the patient and administered to the patient, as taught by Boots, because there is a need in the art for methods to reduce the incidence of ARIA in susceptible Alzheimer's disease patients during AD treatment protocols [0017].
Regarding Claim 20, Dou discloses all limitations noted above except the one or more anti-ARIA treatments comprise one or more anti-ARIA antibodies.
However, in a similar field of endeavor, Boots teaches the one or more anti-ARIA treatments comprise one or more anti-ARIA antibodies (“After the onset of ARIA in the subject, administration of the anti-beta-amyloid antibody to the subject is suspended until the ARIA resolves (and if there are clinical symptoms, until they also resolve). The method further involves resuming administration to the subject of the same dose of the anti-beta-amyloid antibody that was administered immediately prior to the subject developing the ARIA.” [0019], “the method further involves subsequently administering the anti-beta-amyloid antibody at a dose that is higher than the dose that is administered upon resumption of administration after resolution of the ARIA.” [0023]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the one or more anti-ARIA treatments comprise one or more anti-ARIA antibodies, as taught by Boots, because there is a need in the art for methods to reduce the incidence of ARIA in susceptible Alzheimer's disease patients during AD treatment protocols [0017].
Claims 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Dou in view of Barkhof et al. (F. Barkhof et al., “An MRI rating scale for amyloid-related imaging abnormalities with edema or effusion,” American Journal of Neuroradiology, vol. 34, no. 8, pp. 1550–1555, Feb. 2013; hereinafter referred to as Barkhof).
Regarding Claim 26, Dou discloses all limitations noted above except that the classification score comprises one of a plurality of classification scores, and wherein the plurality of classification scores comprises: a first classification score indicative of mild ARIA; a second classification score indicative of moderate ARIA; and a third classification score indicative of severe ARIA.
However, in a similar field of endeavor, Barkhof teaches a study to establish a reproducible, clinically applicable, visual MR imaging rating scale for ARIA-E and to examine its internal validity in terms of inter-rater reliability [Introduction].
Barkhof also teaches the classification score comprises one of a plurality of classification scores, and wherein the plurality of classification scores comprises: a first classification score indicative of mild ARIA; a second classification score indicative of moderate ARIA; and a third classification score indicative of severe ARIA (“The developed rating scale for ARIA-E included both the location and magnitude of presentation of parenchymal hyperintensities, sulcal hyperintensities, and gyral swelling. If ≥1 of those 3 findings was present, the changes were scored according to the anatomic location in terms of lobe and side, resulting in scores for 6 regions bilaterally: frontal lobe, parietal lobe, temporal lobe, occipital lobe, central region (basal ganglia, thalamus, internal and external capsules, corpus callosum, insula), and infratentorial region (brain stem and cerebellum). Within each region, a score of 0–5 was given on the basis of the spatial extent and multifocality of the abnormality.” [Results], “As shown in Fig 4, the cases used in this study represented a wide range of ARIA-E pathology and illustrate the dynamics of the scale. Among the 5 cases with the highest scores, the score was strongly driven by parenchymal hyperintensity in cases 1 and 2 (with some additional sulcal hyperintensity), whereas sulcal hyperintensity was the major determinant in cases 3, 5, and 10 (with barely any parenchymal hyperintensity in the latter 2). Scores for swelling followed those of sulcal hyperintensity rather than those of parenchymal hyperintensity. Raters provided identical scores for case 8 with a score of 3 for both sulcal hyperintensity and gyral swelling in the left frontal region by both raters. Case 7 had similar but not identical scores by the 2 raters, with both raters identifying lesions in the same regions and the same type of lesions within each region, but with 1 rater-provided score 1 category higher for 2 of the 7 regions with lesions.
Both raters identified case 3 with the highest score, and the individual components scored were essentially identical. Case 10 had the largest absolute difference in the total score between the 2 raters, and this was due to 1 rater identifying 2 additional regions with lesions and having higher scores in the regions where both raters identified lesions (Fig 5).” [Description of Findings]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the classification score comprises one of a plurality of classification scores, wherein the plurality of classification scores comprises: a first classification score indicative of mild ARIA; a second classification score indicative of moderate ARIA; and a third classification score indicative of severe ARIA, as taught by Barkhof, because an MR imaging scale that is both reproducible and easily implemented would assist in monitoring and evaluating this adverse event [Abstract].
Regarding Claim 27, Dou discloses all limitations noted above except that the classification score comprises a Barkhof Grand Total Score (BGTS) score.
However, in a similar field of endeavor, Barkhof teaches the classification score comprises a Barkhof Grand Total Score (BGTS) score (“When a finding covered multiple lobes, the maximum in-plane diameter of the abnormality involving that particular lobe was measured and scored accordingly. Figures 1–3 provide examples of assessing the size and extent of the pathologic changes. A total score can be derived by summing up the 12 regional scores (range, 0–60) from the characteristic, with the maximum score defining the regional score.” [Results]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dou as outlined above such that the classification score comprises a Barkhof Grand Total Score (BGTS) score, as taught by Barkhof, because an MR imaging scale that is both reproducible and easily implemented would assist in monitoring and evaluating this adverse event [Abstract].
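For illustration only, the grand-total arithmetic that Barkhof describes (12 regional scores of 0–5 each, summed to a total in the range 0–60) can be sketched as follows. This sketch is not part of the claims or the cited references; the function name, region labels, and example scores are hypothetical.

```python
# Illustrative sketch of the Barkhof grand-total summation: 6 anatomic
# regions scored bilaterally (12 regional scores), each 0-5, summed to
# a total in the range 0-60. Names and example values are hypothetical.

REGIONS = ["frontal", "parietal", "temporal", "occipital", "central", "infratentorial"]
SIDES = ["left", "right"]

def grand_total(scores: dict) -> int:
    """Sum the 12 regional scores (each 0-5); the result lies in 0-60."""
    total = 0
    for region in REGIONS:
        for side in SIDES:
            s = scores.get((region, side), 0)  # unscored region counts as 0
            if not 0 <= s <= 5:
                raise ValueError(f"score for {(region, side)} out of range: {s}")
            total += s
    return total

# Hypothetical example: findings in the left frontal and left parietal regions.
example = {("frontal", "left"): 3, ("parietal", "left"): 1}
print(grand_total(example))  # 4
```

The maximum possible total is 12 regions × 5 = 60, matching the 0–60 range recited in the quoted passage.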
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN MALDONADO whose telephone number is 703-756-1421. The examiner can normally be reached 8:00 am-4:00 pm PST, M-Th.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski, can be reached at (571) 272-7230.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Steven Maldonado/
Patent Examiner, Art Unit 3797
/SHAHDEEP MOHAMMED/Primary Examiner, Art Unit 3797