Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,410

BRAIN IMAGING SYSTEM AND BRAIN IMAGING METHOD

Status: Final Rejection (§103)
Filed: Mar 20, 2023
Examiner: ALDARRAJI, ZAINAB MOHAMMED
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: A-Moy Limited
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 3y 5m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 67% — above average (81 granted / 121 resolved; -3.1% vs TC avg)
Interview Lift: +16.1% (strong; resolved cases with interview)
Avg Prosecution: 3y 5m (29 currently pending)
Total Applications: 150 (across all art units)
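As a quick arithmetic check, the headline rates above are mutually consistent; the sketch below assumes the lift is the simple difference between the with-interview allowance rate and the career baseline, which the dashboard does not state explicitly:

```python
# Career allowance rate from the raw counts shown above.
granted, resolved = 81, 121
career_rate = 100 * granted / resolved
print(round(career_rate, 1))   # 66.9, displayed as 67%

# Interview lift, assuming it is the simple difference between the
# with-interview allowance rate (83%) and the career baseline.
with_interview = 83.0
lift = with_interview - career_rate
print(round(lift, 1))          # 16.1, matching "+16.1% Interview Lift"
```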

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 121 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07/07/2025 has been entered.

Response to Amendment

The proposed reply filed on 07/07/2025 has been entered. Claims 1-6, 8-18, and 20-24 remain pending in the current application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 8-9, 13-16, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US Pub No. 2016/0166159) in view of Akkus (NPL: "Deep learning for brain MRI segmentation: state of the art and future directions") and Kang (WO 2017139895).

Regarding claim 1, Yang teaches a brain imaging system, comprising (para. 0038; CT, MRI and NM imaging systems):

a first imaging device, configured to capture a first brain image set by scanning a patient, wherein the first brain image set includes a plurality of first brain images that provides cerebral data representing a first contrast agent in a brain of the patient over time (para. 0038; A bolus of contrasting agents is introduced via a needle into a patient, for example, the arm of the patient. However, the bolus can be input to any other part of the patient. A region of interest (ROI) may be a tissue 6 in a part of the patient's brain as shown in FIG. 1. Alternatively, the ROI may be a pixel or a plurality of pixels, where any pixels represent a calculated image to produce one or more perfusion maps. Blood circulating throughout the patient will contain the contrast agent and in particular may be delivered to the tissue 6 via artery 8, and the blood flowing through the tissue 6 is returned to the heart via vein 10. Raw data and/or images collected by a scan, such as from a CT scanner 20, MRI scanner 30 or NM scanner 35, are forwarded to a data storage system 40.);

a second imaging device, configured to capture a second brain image set by scanning the patient, wherein the second brain image set includes a plurality of second brain images that provides cerebral data representing a second contrast agent in the brain of the patient over time (para. 0038; A bolus of contrasting agents is introduced via a needle into a patient, for example, the arm of the patient. However, the bolus can be input to any other part of the patient. A region of interest (ROI) may be a tissue 6 in a part of the patient's brain as shown in FIG. 1. Alternatively, the ROI may be a pixel or a plurality of pixels, where any pixels represent a calculated image to produce one or more perfusion maps. Blood circulating throughout the patient will contain the contrast agent and in particular may be delivered to the tissue 6 via artery 8, and the blood flowing through the tissue 6 is returned to the heart via vein 10. Raw data and/or images collected by a scan, such as from a CT scanner 20, MRI scanner 30 or NM scanner 35, are forwarded to a data storage system 40.); and

a processor electrically connected to the first imaging device and the second imaging device, wherein the processor is configured to (para. 0038; A computer program operating on a processor 50, in the form of a computer, is used to retrieve the various images or raw data from any one of the scanners 20, 30 or 35 or from the data storage system 40. The computer program need not reside on computer 50, but may reside in a console computer linked to any one of the scanners 20, 30 or 35.):

obtain, by performing an image pre-processing process on the first brain image set and the second brain image set, a first processed brain image set and a second processed brain image set (paras. 0038 and 0072; the program performs various pre-processing steps, including (i) motion correction; (ii) detection of the AIF and VOF; (iii) setting a baseline time period before contrast arrival; and (iv) converting a signal intensity profile to a contrast concentration time curve.);

obtain, by performing an image enhancing process on the first processed brain image set and the second processed brain image set, a first enhanced brain image set and a second enhanced brain image set (para. 0038; The program then processes those images to provide an improved data set for a clinician to use, particularly in relation to perfusion indices including blood flow, blood volume, mean transit time, arterial delay time, arterial dispersion time or relative arterial dispersion, tissue dispersion time or relative tissue dispersion. The system performs data processing on the images to provide enhanced images after processing them.);

select, by using a first model having been trained, first features from the first enhanced image set that are optimal for estimating cerebral perfusion (para. 0056; Using the computer program, the user selects the initial AIF and VIF, and the program will automatically derive the AIF.sub.t(t) input to the tissue 6 based on the first model and the convolution thereof. Secondly, the program will estimate tissue blood flow F.sub.t and IRF R.sub.e(t) and derive parameter values used to build the simulated tissue IRF R.sub.s(t) in the second model. The program further calculates a simulated contrast curve at the tissue of interest. The seven parameters F.sub.t, t.sub.1, σ.sub.1, α.sub.1, σ.sub.2, α.sub.2 and t.sub.2 are optimized through a least squares method in order to fit the simulated C.sub.s(t) to the measured tissue curve C(t). The examiner is interpreting these parameters as the first features that are optimal for estimating cerebral perfusion.);

select, by using a second model having been trained, second features from the second enhanced image set that are optimal for brain lesion identification (paras. 0072-0078; The program processes data on a pixel-by-pixel basis to produce various perfusion maps using a model-free deconvolution technique, such as the singular value decomposition (SVD) technique, by taking into account arterial delay and dispersion effects. The program identifies ischemic pixels where DT is greater than a predetermined first threshold value and CBV is below a second threshold value. The program identifies the infarct portion of the ischemic lesion, where DT is greater than a predetermined third threshold value and/or CBV is below a fourth threshold value. The examiner is interpreting the second features as the DT and CBV.);

obtain, by performing calculations on the first features, a plurality of brain perfusion indices (para. 0057; With the optimized seven parameters F.sub.t, t.sub.1, σ.sub.1, α.sub.1, σ.sub.2, α.sub.2 and t.sub.2, several quantitative perfusion indices can be determined, such as BF, MTT, BV, DT, TDT, etc.); and

identify, by inputting the second features to a model having been trained, position information and volume information of one or more target brain lesions in the brain of the patient (paras. 0077-0078; The ischemic penumbra can be identified at step 914 by the mismatch region between the ischemic lesion and the infarct. A tissue status map can then be created to display the infarct (red) 702 surrounded by the penumbra (green) 704 as a color overlay on the corresponding raw image. The program measures the penumbra volume as a percentage of the total ischemic lesion to determine treatment of the acute stroke patient. The examiner is interpreting the map with the color coding of the lesion as a location indication, and the volume of the penumbra as a percentage of the volume of the lesion.).

However, Yang fails to explicitly teach using deep learning models to detect abnormalities in the brain, wherein the deep learning models are long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data.

Akkus, in the same field of endeavor, teaches using deep learning models to detect abnormalities in the brain (page 450; An emerging machine learning technique referred to as deep learning [1] can help avoid limitations of classical machine learning algorithms, and its self-learning of features may enable identification of new useful imaging features for quantitative analysis of brain MRI. Deep learning refers to neural networks with many layers (usually more than five) that extract a hierarchy of features from raw input images. It is a new and popular type of machine learning technique that extracts a complex hierarchy of features from images due to its self-learning ability, as opposed to the hand-crafted feature extraction in classical machine learning algorithms.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the models of Yang to incorporate the teaching of Akkus to include deep learning models to detect brain abnormalities. Doing so would help in the identification of new useful imaging features for quantitative analysis of a large number of brain images and in the accurate classification of brain abnormalities, as taught by Akkus on page 450.

However, Yang in view of Akkus fails to explicitly teach that the deep learning models are long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data.
Kang, in the same field of endeavor, teaches long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data (paras. 0040-0059; The Long Short Term Memory (LSTM) neural network allows us to perform group feature selections and classifications. The LSTM machine learning algorithms are discussed in more detail below. From this process, the set of bitplanes to be isolated from image sequences to reflect temporal changes in HC is obtained. An image filter is configured to isolate the identified bitplanes in subsequent steps described below. The Long Short Term Memory (LSTM) neural network, or a suitable alternative such as a non-linear Support Vector Machine, and deep learning may again be used to assess the existence of common spatial-temporal patterns of hemoglobin changes across subjects. One recurrent neural network is known as the Long Short Term Memory (LSTM) neural network, which is a category of neural network model specified for sequential data analysis and prediction. The LSTM neural network comprises at least three layers of cells. The first layer is an input layer, which accepts the input data. The second (and perhaps additional) layer is a hidden layer, which is composed of memory cells (see Fig. 12). The final layer is an output layer, which generates the output value based on the hidden layer using Logistic Regression. The goal is to classify the sequence into different conditions. The Logistic Regression output layer generates the probability of each condition based on the representation sequence from the LSTM hidden layer. The examiner notes that the LSTM is trained on a temporal hemoglobin signal sequence to capture spatiotemporal patterns and classify physiological states from sequential inputs; thus, the LSTM is used for learning order dependence and fitting time-series data for classifying physiological states.).
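Kang's three-layer arrangement (an input layer, an LSTM hidden layer of memory cells, and a logistic-regression output layer giving the probability of each condition) can be sketched as a minimal NumPy forward pass. The gate equations, layer sizes, and random weights below are illustrative assumptions, not Kang's disclosed parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(seq, Wx, Wh, b, Wo, bo):
    """Run one LSTM layer over a (T, n_in) sequence, then a
    logistic-regression output layer on the final hidden state."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)        # hidden state and memory cell
    for x in seq:
        z = Wx @ x + Wh @ h + b            # all four gates in one product
        i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
        g = np.tanh(z[3 * H:])             # candidate cell update
        c = f * c + i * g                  # memory cell preserves earlier info
        h = o * np.tanh(c)
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max())      # softmax over the conditions
    return e / e.sum()

rng = np.random.default_rng(0)
n_in, H, n_cond = 4, 8, 2                  # toy sizes (assumptions)
probs = lstm_classify(
    rng.standard_normal((20, n_in)),       # a made-up 20-step sequence
    rng.standard_normal((4 * H, n_in)) * 0.1,
    rng.standard_normal((4 * H, H)) * 0.1,
    np.zeros(4 * H),
    rng.standard_normal((n_cond, H)) * 0.1,
    np.zeros(n_cond),
)
print(probs)  # probabilities of the two conditions, summing to 1
```

In Kang the analogous network is trained on hemoglobin-concentration sequences; here the input is random data, so only the shapes and the order-dependent recurrence are meaningful.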
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the deep learning models of Yang in view of Akkus to incorporate the teaching of Kang to include a Long Short-Term Memory (LSTM) neural network. Doing so would enable the system to learn meaningful order-dependent features for improved feature selection and classification. The LSTM is able to identify which temporal image features (bitplanes) matter most for physiological state classification, detect patterns of physiological change across time and regions, and model sequential inputs while preserving important earlier information, as disclosed in paras. 0040, 0046, and 0051-0059. Thus, modifying the deep learning models of Yang in view of Akkus with the LSTM neural network of Kang would enable the system to accurately model temporal patterns and sequential relationships.

Regarding claim 2, Yang teaches the brain imaging system according to claim 1, wherein the first imaging device is a computed tomography (CT) imaging device, the plurality of first brain images are CT brain images, the second imaging device is a magnetic resonance imaging (MRI) device, and the plurality of second brain images are MRI brain images (para. 0038; the imaging devices are CT and MRI, and the system is a multimodal imaging system which uses images from both imaging devices).

Regarding claim 3, Yang teaches the brain imaging system according to claim 2; however, Yang fails to explicitly teach wherein the pre-processing process includes: performing a re-alignment process to align positions of the brain in each image; performing a co-registration process to normalize sizes and coordinates in each image; and performing a segmentation process to isolate a target region of the brain in each image.
Akkus, in the same field of endeavor, teaches a pre-processing process that includes: performing a re-alignment process to align positions of the brain in each image (page 452, right col; Registration is spatial alignment of the images to a common anatomical space. Intrapatient registration aims to align the images of different sequences, e.g., T1 and T2, to obtain a multi-channel representation for each location within the brain.); performing a co-registration process to normalize sizes and coordinates in each image (page 452, right col; Registration is spatial alignment of the images to a common anatomical space); and performing a segmentation process to isolate a target region of the brain in each image (pages 454-455; lesion segmentation, which provides a segmented region in an image).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the preprocessing steps of Yang to incorporate the teaching of Akkus to include preprocessing steps for the images, including alignment, registration, and segmentation. Doing so would help in automated analysis of the images, improve the learning process, and help to avoid suppression of true patterns of structures and intensity differentiation in the output of the models, as taught by Akkus on pages 452 and 457.

Regarding claim 4, Yang teaches the brain imaging system according to claim 3; however, Yang fails to explicitly teach wherein the co-registration process further includes: obtaining a target brain atlas from a plurality of reference brain atlases built from one or more representations of brain; spatially normalizing the brain of each image to a coordinate system; and registering the brain of each image to the target brain atlas by matching anatomy of the brain with a representation of anatomy in the target brain atlas.

Akkus, in the same field of endeavor, teaches obtaining a target brain atlas from a plurality of reference brain atlases built from one or more representations of brain (page 454, right col; supervised machine learning methods that, given a representative dataset, learn the textural and appearance properties of lesions [46], and atlas-based methods that combine both supervised and unsupervised learning into a unified pipeline by registering labeled data or known cohort data into a common anatomical space); spatially normalizing the brain of each image to a coordinate system (page 452, right col; Registration is spatial alignment of the images to a common anatomical space); and registering the brain of each image to the target brain atlas by matching anatomy of the brain with a representation of anatomy in the target brain atlas (pages 452 and 454; atlas-based methods register images to a common space (atlas)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang to incorporate the teaching of Akkus to include the co-registration step. Doing so would help in automated analysis of the images, improve the learning process, and help to avoid suppression of true patterns of structures and intensity differentiation in the output of the models, as taught by Akkus on pages 452 and 457.

Regarding claim 8, Yang teaches the brain imaging system according to claim 1, wherein the processor is further configured to: detecting a vessel occlusion, infarction or ischemia region of the first brain image set according to the plurality of brain perfusion indices (para. 0015; detecting the infarct portion of the ischemic lesion, where DT is greater than a predetermined third threshold value and/or CBV is below a fourth threshold value).
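The DT/CBV threshold logic Yang is cited for in claims 1 and 8 (ischemic pixels from the first and second thresholds, infarct from the third and fourth, penumbra as the mismatch) can be sketched as follows; the threshold values and toy maps are illustrative assumptions, not values from Yang:

```python
import numpy as np

def tissue_status(DT, CBV, t1=2.0, t2=9.0, t3=8.0, t4=1.5):
    """Classify perfusion maps into ischemic lesion, infarct, and penumbra.

    DT, CBV : delay-time and cerebral-blood-volume maps of the same shape.
    t1..t4  : hypothetical stand-ins for Yang's first through fourth
              predetermined threshold values.
    """
    ischemic = (DT > t1) & (CBV < t2)              # first/second thresholds
    infarct = ischemic & ((DT > t3) | (CBV < t4))  # third/fourth thresholds
    penumbra = ischemic & ~infarct                 # mismatch region
    pct = 100.0 * penumbra.sum() / max(ischemic.sum(), 1)
    return ischemic, infarct, penumbra, pct

# Toy 2x3 maps: one clearly infarcted pixel and one penumbral pixel.
DT = np.array([[10.0, 3.0, 0.5], [1.0, 0.2, 0.1]])
CBV = np.array([[1.0, 4.0, 5.0], [5.0, 5.0, 5.0]])
ischemic, infarct, penumbra, pct = tissue_status(DT, CBV)
print(int(ischemic.sum()), int(infarct.sum()), int(penumbra.sum()), pct)
# prints: 2 1 1 50.0
```

Yang's para. 0078 then reports the penumbra volume as a percentage of the ischemic lesion, which corresponds to the pct value here.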
Regarding claim 9, Yang teaches the brain imaging system according to claim 8, wherein the plurality of brain perfusion indices include one or more of a first concentration curve, a first cerebral blood flow, a first cerebral blood volume, a first cerebral blood mean transit time and a first contrast agent time to peak (para. 0038; The program then processes those images to provide an improved data set for a clinician to use, particularly in relation to perfusion indices including blood flow, blood volume, mean transit time, arterial delay time, arterial dispersion time or relative arterial dispersion, tissue dispersion time or relative tissue dispersion.).

Regarding claim 13, Yang teaches a brain imaging method, comprising (para. 0038; CT, MRI and NM imaging systems):

configuring a first imaging device, configured to capture a first brain image set by scanning a patient, wherein the first brain image set includes a plurality of first brain images that provides cerebral data representing a first contrast agent in a brain of the patient over time (para. 0038; A bolus of contrasting agents is introduced via a needle into a patient, for example, the arm of the patient. However, the bolus can be input to any other part of the patient. A region of interest (ROI) may be a tissue 6 in a part of the patient's brain as shown in FIG. 1. Alternatively, the ROI may be a pixel or a plurality of pixels, where any pixels represent a calculated image to produce one or more perfusion maps. Blood circulating throughout the patient will contain the contrast agent and in particular may be delivered to the tissue 6 via artery 8, and the blood flowing through the tissue 6 is returned to the heart via vein 10. Raw data and/or images collected by a scan, such as from a CT scanner 20, MRI scanner 30 or NM scanner 35, are forwarded to a data storage system 40.);

configuring a second imaging device, configured to capture a second brain image set by scanning the patient, wherein the second brain image set includes a plurality of second brain images that provides cerebral data representing a second contrast agent in the brain of the patient over time (para. 0038; A bolus of contrasting agents is introduced via a needle into a patient, for example, the arm of the patient. However, the bolus can be input to any other part of the patient. A region of interest (ROI) may be a tissue 6 in a part of the patient's brain as shown in FIG. 1. Alternatively, the ROI may be a pixel or a plurality of pixels, where any pixels represent a calculated image to produce one or more perfusion maps. Blood circulating throughout the patient will contain the contrast agent and in particular may be delivered to the tissue 6 via artery 8, and the blood flowing through the tissue 6 is returned to the heart via vein 10. Raw data and/or images collected by a scan, such as from a CT scanner 20, MRI scanner 30 or NM scanner 35, are forwarded to a data storage system 40.); and

configuring a processor electrically connected to the first imaging device and the second imaging device, to (para. 0038; A computer program operating on a processor 50, in the form of a computer, is used to retrieve the various images or raw data from any one of the scanners 20, 30 or 35 or from the data storage system 40. The computer program need not reside on computer 50, but may reside in a console computer linked to any one of the scanners 20, 30 or 35.):

obtain, by performing an image pre-processing process on the first brain image set and the second brain image set, a first processed brain image set and a second processed brain image set (paras. 0038 and 0072; the program performs various pre-processing steps, including (i) motion correction; (ii) detection of the AIF and VOF; (iii) setting a baseline time period before contrast arrival; and (iv) converting a signal intensity profile to a contrast concentration time curve.);

obtain, by performing an image enhancing process on the first processed brain image set and the second processed brain image set, a first enhanced brain image set and a second enhanced brain image set (para. 0038; The program then processes those images to provide an improved data set for a clinician to use, particularly in relation to perfusion indices including blood flow, blood volume, mean transit time, arterial delay time, arterial dispersion time or relative arterial dispersion, tissue dispersion time or relative tissue dispersion. The system performs data processing on the images to provide enhanced images after processing them.);

select, by using a first model having been trained, first features from the first enhanced image set that are optimal for estimating cerebral perfusion (para. 0056; Using the computer program, the user selects the initial AIF and VIF, and the program will automatically derive the AIF.sub.t(t) input to the tissue 6 based on the first model and the convolution thereof. Secondly, the program will estimate tissue blood flow F.sub.t and IRF R.sub.e(t) and derive parameter values used to build the simulated tissue IRF R.sub.s(t) in the second model. The program further calculates a simulated contrast curve at the tissue of interest. The seven parameters F.sub.t, t.sub.1, σ.sub.1, α.sub.1, σ.sub.2, α.sub.2 and t.sub.2 are optimized through a least squares method in order to fit the simulated C.sub.s(t) to the measured tissue curve C(t). The examiner is interpreting these parameters as the first features that are optimal for estimating cerebral perfusion.);

select, by using a second model having been trained, second features from the second enhanced image set that are optimal for brain lesion identification (paras. 0072-0078; The program processes data on a pixel-by-pixel basis to produce various perfusion maps using a model-free deconvolution technique, such as the singular value decomposition (SVD) technique, by taking into account arterial delay and dispersion effects. The program identifies ischemic pixels where DT is greater than a predetermined first threshold value and CBV is below a second threshold value. The program identifies the infarct portion of the ischemic lesion, where DT is greater than a predetermined third threshold value and/or CBV is below a fourth threshold value. The examiner is interpreting the second features as the DT and CBV.);

obtain, by performing calculations on the first features, a plurality of brain perfusion indices (para. 0057; With the optimized seven parameters F.sub.t, t.sub.1, σ.sub.1, α.sub.1, σ.sub.2, α.sub.2 and t.sub.2, several quantitative perfusion indices can be determined, such as BF, MTT, BV, DT, TDT, etc.); and

identify, by inputting the second features to a model having been trained, position information and volume information of one or more target brain lesions in the brain of the patient (paras. 0077-0078; The ischemic penumbra can be identified at step 914 by the mismatch region between the ischemic lesion and the infarct. A tissue status map can then be created to display the infarct (red) 702 surrounded by the penumbra (green) 704 as a color overlay on the corresponding raw image. The program measures the penumbra volume as a percentage of the total ischemic lesion to determine treatment of the acute stroke patient. The examiner is interpreting the map with the color coding of the lesion as a location indication, and the volume of the penumbra as a percentage of the volume of the lesion.).

However, Yang fails to explicitly teach using deep learning models to detect abnormalities in the brain, wherein the deep learning models are long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data.

Akkus, in the same field of endeavor, teaches using deep learning models to detect abnormalities in the brain (page 450; An emerging machine learning technique referred to as deep learning [1] can help avoid limitations of classical machine learning algorithms, and its self-learning of features may enable identification of new useful imaging features for quantitative analysis of brain MRI. Deep learning refers to neural networks with many layers (usually more than five) that extract a hierarchy of features from raw input images. It is a new and popular type of machine learning technique that extracts a complex hierarchy of features from images due to its self-learning ability, as opposed to the hand-crafted feature extraction in classical machine learning algorithms.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the models of Yang to incorporate the teaching of Akkus to include deep learning models to detect brain abnormalities. Doing so would help in the identification of new useful imaging features for quantitative analysis of a large number of brain images and in the accurate classification of brain abnormalities, as taught by Akkus on page 450.

However, Yang in view of Akkus fails to explicitly teach that the deep learning models are long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data.
Kang, in the same field of endeavor, teaches long short-term memory (LSTM) neural networks capable of learning order dependence in fitting time-series data (paras. 0040-0059; The Long Short Term Memory (LSTM) neural network allows us to perform group feature selections and classifications. The LSTM machine learning algorithms are discussed in more detail below. From this process, the set of bitplanes to be isolated from image sequences to reflect temporal changes in HC is obtained. An image filter is configured to isolate the identified bitplanes in subsequent steps described below. The Long Short Term Memory (LSTM) neural network, or a suitable alternative such as a non-linear Support Vector Machine, and deep learning may again be used to assess the existence of common spatial-temporal patterns of hemoglobin changes across subjects. One recurrent neural network is known as the Long Short Term Memory (LSTM) neural network, which is a category of neural network model specified for sequential data analysis and prediction. The LSTM neural network comprises at least three layers of cells. The first layer is an input layer, which accepts the input data. The second (and perhaps additional) layer is a hidden layer, which is composed of memory cells (see Fig. 12). The final layer is an output layer, which generates the output value based on the hidden layer using Logistic Regression. The goal is to classify the sequence into different conditions. The Logistic Regression output layer generates the probability of each condition based on the representation sequence from the LSTM hidden layer. The examiner notes that the LSTM is trained on a temporal hemoglobin signal sequence to capture spatiotemporal patterns and classify physiological states from sequential inputs; thus, the LSTM is used for learning order dependence and fitting time-series data for classifying physiological states.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the deep learning models of Yang in view of Akkus to incorporate the teaching of Kang to include a Long Short-Term Memory (LSTM) neural network. Doing so would enable the system to learn meaningful order-dependent features for improved feature selection and classification. The LSTM is able to identify which temporal image features (bitplanes) matter most for physiological state classification, detect patterns of physiological change across time and regions, and model sequential inputs while preserving important earlier information, as disclosed in paras. 0040, 0046, and 0051-0059. Thus, modifying the deep learning models of Yang in view of Akkus with the LSTM neural network of Kang would enable the system to accurately model temporal patterns and sequential relationships.

Regarding claim 14, Yang teaches the brain imaging method according to claim 13, wherein the first imaging device is a computed tomography (CT) imaging device, the plurality of first brain images are CT brain images, the second imaging device is a magnetic resonance imaging (MRI) device, and the plurality of second brain images are MRI brain images (para. 0038; the imaging devices are CT and MRI, and the system is a multimodal imaging system which uses images from both imaging devices).

Regarding claim 15, Yang teaches the brain imaging method according to claim 14; however, Yang fails to explicitly teach wherein the pre-processing process includes: performing a re-alignment process to align positions of the brain in each image; performing a co-registration process to normalize sizes and coordinates in each image; and performing a segmentation process to isolate a target region of the brain in each image.
Akkus, in the same field of endeavor, teaches that the pre-processing process includes: performing a re-alignment process to align positions of the brain in each image (page 452, right col; Registration is spatial alignment of the images to a common anatomical space. Intrapatient registration aims to align the images of different sequences, e.g., T1 and T2, to obtain a multi-channel representation for each location within the brain.); performing a co-registration process to normalize sizes and coordinates in each image (page 452, right col; Registration is spatial alignment of the images to a common anatomical space); and performing a segmentation process to isolate a target region of the brain in each image (pages 454-455; lesion segmentation, which provides a segmented region in an image). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang to incorporate the teaching of Akkus to include pre-processing steps for the images, including alignment, registration, and segmentation. Doing so would aid automated analysis of the images, improve the learning process, and help avoid suppression of true patterns of structures and intensity differentiation in the output of the models, as taught within Akkus at pages 452 and 457. Regarding claim 16, Yang teaches the brain imaging method according to claim 15; however, Yang fails to explicitly teach wherein the co-registration process further includes: obtaining a target brain atlas from a plurality of reference brain atlases built from one or more representations of brain; spatially normalizing the brain of each image to a coordinate system; and registering the brain of each image to the target brain atlas by matching anatomy of the brain with a representation of anatomy in the target brain atlas.
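The claimed pre-processing chain (re-alignment, normalization to a common coordinate grid, segmentation) can be sketched as a toy pipeline. This is an illustration under strong simplifications (integer-shift rigid alignment by brute-force cross-correlation, nearest-neighbour resampling, global thresholding), not the registration methods Akkus actually surveys:

```python
import numpy as np

def realign(moving, reference, max_shift=5):
    """Re-alignment: find the integer (dy, dx) translation that best
    matches `moving` to `reference` via brute-force cross-correlation."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(np.roll(moving, (dy, dx), axis=(0, 1)) * reference)
            if score > best:
                best, best_shift = score, (dy, dx)
    return np.roll(moving, best_shift, axis=(0, 1)), best_shift

def to_atlas_grid(image, atlas_shape):
    """Co-registration step: nearest-neighbour resampling onto the atlas
    coordinate grid (a crude stand-in for full spatial normalization)."""
    rows = np.arange(atlas_shape[0]) * image.shape[0] // atlas_shape[0]
    cols = np.arange(atlas_shape[1]) * image.shape[1] // atlas_shape[1]
    return image[np.ix_(rows, cols)]

def segment(image, threshold):
    """Segmentation: boolean mask isolating the target region."""
    return image > threshold

# Toy example: a bright square displaced by (2, -3) pixels.
ref = np.zeros((32, 32)); ref[10:20, 10:20] = 1.0
mov = np.roll(ref, (2, -3), axis=(0, 1))
aligned, shift = realign(mov, ref)
norm = to_atlas_grid(aligned, (16, 16))
mask = segment(norm, 0.5)
```

Production pipelines would replace each step with dedicated tools (mutual-information registration, atlas warping, learned segmentation), but the data flow is the same.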
Akkus, in the same field of endeavor, teaches obtaining a target brain atlas from a plurality of reference brain atlases built from one or more representations of brain (page 454, right col; supervised machine learning methods that, given a representative dataset, learn the textural and appearance properties of lesions [46], and atlas-based methods that combine both supervised and unsupervised learning into a unified pipeline by registering labeled data or a known cohort data into a common anatomical space); spatially normalizing the brain of each image to a coordinate system (page 452, right col; Registration is spatial alignment of the images to a common anatomical space); and registering the brain of each image to the target brain atlas by matching anatomy of the brain with a representation of anatomy in the target brain atlas (pages 452 and 454; atlas-based methods register images to a common space (atlas)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang to incorporate the teaching of Akkus to include the co-registration step. Doing so would aid automated analysis of the images, improve the learning process, and help avoid suppression of true patterns of structures and intensity differentiation in the output of the models, as taught within Akkus at pages 452 and 457. Regarding claim 20, Yang teaches the brain imaging method according to claim 13, wherein the processor is further configured to: detecting a vessel occlusion, infarction or ischemia region of the first brain image set according to the plurality of brain perfusion indices (para. 0015; detecting the infarct portion of the ischemic lesion, where DT is greater than a predetermined third threshold value and/or CBV is below a fourth threshold value).
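The perfusion indices recited in claims 20-21 can be illustrated with a short sketch based on the standard indicator-dilution relations (CBV as area under the concentration curve, MTT as its first moment, CBF = CBV / MTT by the central volume principle, and time-to-peak as the curve's argmax). The formulas are textbook relations; the threshold cut-offs and scaling are invented for illustration and are not Yang's values:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (avoids version-dependent np.trapz)."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def perfusion_indices(t, c):
    """Perfusion indices from a contrast concentration curve c(t),
    using the central volume principle CBF = CBV / MTT."""
    cbv = trapz(c, t)                      # cerebral blood volume ~ area
    mtt = trapz(t * c, t) / cbv            # mean transit time ~ first moment
    cbf = cbv / mtt                        # cerebral blood flow
    ttp = float(t[np.argmax(c)])           # contrast-agent time to peak
    return {"CBV": cbv, "CBF": cbf, "MTT": mtt, "TTP": ttp}

def classify_tissue(idx, cbv_thresh=0.5, mtt_thresh=10.0):
    """Threshold logic in the spirit of Yang's para. 0015 (illustrative
    cut-offs): very low CBV suggests infarct; prolonged transit with
    preserved CBV suggests ischemia."""
    if idx["CBV"] < cbv_thresh:
        return "infarct"
    if idx["MTT"] > mtt_thresh:
        return "ischemia"
    return "normal"

t = np.linspace(0, 30, 301)                # seconds, 0.1 s sampling
c = np.exp(-0.5 * ((t - 8.0) / 2.0) ** 2)  # synthetic bolus peaking at 8 s
idx = perfusion_indices(t, c)
```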
Regarding claim 21, Yang teaches the brain imaging method according to claim 20, wherein the plurality of brain perfusion indices include one or more of a first concentration curve, a first cerebral blood flow, a first cerebral blood volume, a first cerebral blood mean transit time and a first contrast agent time to peak (para. 0038; The program then processes those images to provide an improved data set for a clinician to use, particularly in relation to perfusion indices including blood flow, blood volume, mean transit time, arterial delay time, arterial dispersion time or relative arterial dispersion, tissue dispersion time or relative tissue dispersion.). Claims 5-6 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US Pub No. 2016/0166159) in view of Akkus (NPL: “Deep learning for brain MRI segmentation: state of art and future directions”) and Kang (WO 2017139895), and further in view of Selvy (NPL: “A Proficient Clustering Technique to Detect CSF Level in MRI Brain imaging using PSO algorithm”). Regarding claim 5, Yang teaches the brain imaging system according to claim 1; however, Yang fails to explicitly teach wherein the image enhancing process includes: applying a contrast-limited adaptive histogram equalization (CLAHE) algorithm on each image to locally enhance differences between normal regions and regions of interest. Selvy, in the same field of endeavor, teaches applying a contrast-limited adaptive histogram equalization (CLAHE) algorithm on each image to locally enhance differences between normal regions and regions of interest (page 300, left col; Contrast Limited Adaptive Histogram Equalization (CLAHE) [19] was originally developed for medical imaging and has proven to be successful for enhancement of low-contrast images. CLAHE algorithm partitions the images into contextual regions and applies the histogram equalization to each pixel.
This evens out the distribution of used grey values and thus makes hidden features of the image more visible. The full grey spectrum is used to express the image. CLAHE operates on small regions in the image, rather than the entire image. Each small region is contrast enhanced, so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang in view of Akkus and Kang to incorporate the teaching of Selvy to include the use of the CLAHE algorithm. Doing so would help enhance small regions in the image rather than the entire image, as taught within Selvy at page 300, left col. Regarding claim 6, Yang teaches the brain imaging system according to claim 5; however, Yang fails to explicitly teach wherein the image enhancing process further includes: performing a particle swarm optimization algorithm, before applying the CLAHE algorithm, to obtain optimal parameters for the CLAHE algorithm; and applying the CLAHE algorithm on each image by utilizing the optimal parameters. Selvy, in the same field of endeavor, teaches performing a particle swarm optimization algorithm, before applying the CLAHE algorithm, to obtain optimal parameters for the CLAHE algorithm (page 299, left col; PSO is a population-based stochastic approach for solving continuous nonlinear functions. PSO method optimizes the objective function. PSO is initialized with a group of random particles (solutions) and then searches for the optimal solution by updating generations. Particles move through the solution space, and are evaluated according to some fitness criterion after each iteration time step. In every iteration, each particle is updated by the two best values; the first one is the fitness value it has achieved so far. This value is called p_best.
Another best value that is tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population. This is called the global best value, i.e., g_best.); and applying the CLAHE algorithm on each image by utilizing the optimal parameters (page 300, left col; Contrast Limited Adaptive Histogram Equalization (CLAHE) [19] was originally developed for medical imaging and has proven to be successful for enhancement of low-contrast images. CLAHE algorithm partitions the images into contextual regions and applies the histogram equalization to each pixel. This evens out the distribution of used grey values and thus makes hidden features of the image more visible. The full grey spectrum is used to express the image. CLAHE operates on small regions in the image, rather than the entire image. Each small region is contrast enhanced, so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang in view of Akkus and Kang to incorporate the teaching of Selvy to include the use of the CLAHE algorithm. Doing so would help enhance small regions in the image rather than the entire image, as taught within Selvy at page 300, left col. Regarding claim 17, Yang teaches the brain imaging method according to claim 13; however, Yang fails to explicitly teach wherein the image enhancing process includes: applying a contrast-limited adaptive histogram equalization (CLAHE) algorithm on each image to locally enhance differences between normal regions and regions of interest.
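The combination recited in claims 5-6 and 17-18 (PSO tuning the CLAHE parameters, then CLAHE applied with those parameters) can be sketched as follows. This is one plausible reading under stated assumptions: the equalization is tile-wise with histogram clipping but omits real CLAHE's bilinear blending between neighbouring tile mappings, and the PSO fitness (maximizing output entropy) plus all constants are invented for illustration; only the p_best/g_best bookkeeping follows Selvy's description:

```python
import numpy as np

rng = np.random.default_rng(1)

def clahe(img, clip=0.02, tiles=4, bins=64):
    """Tile-wise clipped histogram equalization on a float image in [0, 1]."""
    out = np.empty_like(img)
    th, tw = img.shape[0] // tiles, img.shape[1] // tiles
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist, edges = np.histogram(tile, bins=bins, range=(0.0, 1.0))
            hist = hist.astype(float) / hist.sum()
            excess = np.maximum(hist - clip, 0.0)                 # clip histogram...
            hist = np.minimum(hist, clip) + excess.sum() / bins   # ...redistribute excess
            cdf = np.cumsum(hist)
            idx = np.clip(np.digitize(tile, edges[1:-1]), 0, bins - 1)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = cdf[idx]          # equalize via CDF
    return out

def entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram (assumed fitness)."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pso_tune_clip(img, n_particles=6, n_iter=10):
    """Minimal PSO over the CLAHE clip limit: each particle tracks its
    personal best (p_best); the swarm tracks the global best (g_best)."""
    pos = rng.uniform(0.01, 0.2, n_particles)
    vel = np.zeros(n_particles)
    p_best = pos.copy()
    p_fit = np.array([entropy(clahe(img, c)) for c in pos])
    g_best = p_best[np.argmax(p_fit)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.5 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, 0.005, 0.5)
        fit = np.array([entropy(clahe(img, c)) for c in pos])
        improved = fit > p_fit
        p_best[improved], p_fit[improved] = pos[improved], fit[improved]
        g_best = p_best[np.argmax(p_fit)]
    return g_best

img = np.clip(rng.normal(0.45, 0.05, (32, 32)), 0.0, 1.0)  # low-contrast image
best_clip = pso_tune_clip(img)
enhanced = clahe(img, clip=best_clip)
```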
Selvy, in the same field of endeavor, teaches applying a contrast-limited adaptive histogram equalization (CLAHE) algorithm on each image to locally enhance differences between normal regions and regions of interest (page 300, left col; Contrast Limited Adaptive Histogram Equalization (CLAHE) [19] was originally developed for medical imaging and has proven to be successful for enhancement of low-contrast images. CLAHE algorithm partitions the images into contextual regions and applies the histogram equalization to each pixel. This evens out the distribution of used grey values and thus makes hidden features of the image more visible. The full grey spectrum is used to express the image. CLAHE operates on small regions in the image, rather than the entire image. Each small region is contrast enhanced, so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang in view of Akkus and Kang to incorporate the teaching of Selvy to include the use of the CLAHE algorithm. Doing so would help enhance small regions in the image rather than the entire image, as taught within Selvy at page 300, left col. Regarding claim 18, Yang teaches the brain imaging method according to claim 17; however, Yang fails to explicitly teach wherein the image enhancing process further includes: performing a particle swarm optimization algorithm, before applying the CLAHE algorithm, to obtain optimal parameters for the CLAHE algorithm; and applying the CLAHE algorithm on each image by utilizing the optimal parameters. Selvy, in the same field of endeavor, teaches performing a particle swarm optimization algorithm, before applying the CLAHE algorithm, to obtain optimal parameters for the CLAHE algorithm (page 299, left col; PSO is a population-based stochastic approach for solving continuous nonlinear functions.
PSO method optimizes the objective function. PSO is initialized with a group of random particles (solutions) and then searches for the optimal solution by updating generations. Particles move through the solution space, and are evaluated according to some fitness criterion after each iteration time step. In every iteration, each particle is updated by the two best values; the first one is the fitness value it has achieved so far. This value is called p_best. Another best value that is tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population. This is called the global best value, i.e., g_best.); and applying the CLAHE algorithm on each image by utilizing the optimal parameters (page 300, left col; Contrast Limited Adaptive Histogram Equalization (CLAHE) [19] was originally developed for medical imaging and has proven to be successful for enhancement of low-contrast images. CLAHE algorithm partitions the images into contextual regions and applies the histogram equalization to each pixel. This evens out the distribution of used grey values and thus makes hidden features of the image more visible. The full grey spectrum is used to express the image. CLAHE operates on small regions in the image, rather than the entire image. Each small region is contrast enhanced, so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang in view of Akkus and Kang to incorporate the teaching of Selvy to include the use of the CLAHE algorithm. Doing so would help enhance small regions in the image rather than the entire image, as taught within Selvy at page 300, left col. Claims 10-12 and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US Pub No.
2016/0166159) in view of Akkus (NPL: “Deep learning for brain MRI segmentation: state of art and future directions”) and Kang (WO 2017139895), and further in view of Tajbakhsh (NPL: “Comparing two classes of end-to-end machine learning models in lung nodule detection and classification: MTANNs vs. CNNs”). Regarding claim 10, Yang teaches the brain imaging system according to claim 1, wherein the processor is further configured to use different types of images to identify brain lesions (paras. 0038 and 0071; the system uses images from MRI and CT imaging systems and processes them to produce various perfusion maps including CBV, CBF, MTT and DT, and further creates a color-coded tissue status map consisting of infarct (red) 702 and penumbra (green) regions overlaid on the corresponding raw image.). However, Yang fails to explicitly teach select, according to type of the one or more target brain lesions, the third deep learning model from a plurality of candidate deep learning models having been trained for identifying different types of brain lesions, wherein the plurality of candidate deep learning models are trained by a plurality of training sets having different types of images, respectively. Tajbakhsh, in the same field of endeavor, teaches selecting, according to the type of the one or more target lesions, the third deep learning model from a plurality of candidate deep learning models having been trained for identifying different types of lesions, wherein the plurality of candidate deep learning models are trained by a plurality of training sets (pages 481 and 485; The training set in the division scenario consisted of patches from 10 malignant nodules and 60 benign nodules (because of the 6 MTANNs in the ensemble), and the testing set consisted of the patches from 66 malignant nodules and 353 benign nodules.
During the testing stage, the probability of each ROI being a malignant nodule was computed as the average of probabilities assigned to the patches that were extracted from the ROI with data augmentation. For performance comparison, we used ROC analysis. As can be seen, CNNs with varying depths performed comparably, yielding no significant performance improvement compared to each other. However, the MTANNs achieved a substantial improvement over the CNN-based systems. We compared the performance of the CNNs and MTANNs after being trained using limited training data. Our experiments showed that the performance of the MTANNs was higher than that of the CNNs for both lung nodule detection and classification. In the second scenario, we used large training datasets for training the CNNs. We observed a lower performance gap between the two models, but the difference was still significant. The examiner notes that the testing and validation of different deep learning models identifies the candidate model with the best performance to detect lesions). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang in view of Akkus and Kang to incorporate the teaching of Tajbakhsh to include the testing and validation of different deep learning models to select the candidate deep learning model for lesion detection. Doing so would help to compare different models and select the best-performing model, as taught within Tajbakhsh at page 485. Regarding claim 11, Yang teaches the brain imaging system according to claim 10, using different types of images to identify brain lesions (paras. 0038 and 0071; the system uses images from MRI and CT imaging systems and processes them to produce various perfusion maps including CBV, CBF, MTT and DT, and further creates a color-coded tissue status map consisting of infarct (red) 702 and penumbra (green) regions overlaid on the corresponding raw image.).
However, Yang fails to explicitly teach that the candidate deep learning models are trained by the different images, and the trained candidate deep learning models are each tested to determine whether or not each of the candidate deep learning models can be selected to identify the one or more target lesions. Tajbakhsh, in the same field of endeavor, teaches that the candidate deep learning models are trained by the different images, and the trained candidate deep learning models are each tested to determine whether or not each of the candidate deep learning models can be selected to identify the one or more target lesions (pages 481 and 485; The training set in the division scenario consisted of patches from 10 malignant nodules and 60 benign nodules (because of the 6 MTANNs in the ensemble), and the testing set consisted of the patches from 66 malignant nodules and 353 benign nodules. During the testing stage, the probability of each ROI being a malignant nodule was computed as the average of probabilities assigned to the patches that were extracted from the ROI with data augmentation. For performance comparison, we used ROC analysis. As can be seen, CNNs with varying depths performed comparably, yielding no significant performance improvement compared to each other. However, the MTANNs achieved a substantial improvement over the CNN-based systems. We compared the performance of the CNNs and MTANNs after being trained using limited training data. Our experiments showed that the performance of the MTANNs was higher than that of the CNNs for both lung nodule detection and classification. In the second scenario, we used large training datasets for training the CNNs. We observed a lower performance gap between the two models, but the difference was still significant. The examiner notes that the testing and validation of different deep learning models identifies the candidate model with the best performance to detect lesions).
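The selection logic the examiner reads into Tajbakhsh (train several candidate models, test each on held-out data, keep the best performer under ROC analysis) can be sketched as follows. The rank-based (Mann-Whitney) AUC formula is standard; the model names, the tiny validation set, and the use of plain callables as "models" are hypothetical stand-ins:

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the rank (Mann-Whitney U) formulation: the probability
    that a random positive is scored above a random negative.
    Assumes distinct scores (no tie handling for brevity)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return float((ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

def select_model(candidates, val_labels, val_inputs):
    """Evaluate each trained candidate on a validation set and return the
    name of the best performer, judged here by ROC AUC."""
    aucs = {name: roc_auc(val_labels, [m(x) for x in val_inputs])
            for name, m in candidates.items()}
    return max(aucs, key=aucs.get), aucs

# Hypothetical candidates: model_a is informative, model_b anti-informative.
labels = np.array([0, 0, 0, 1, 1, 1])
inputs = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
candidates = {
    "model_a": lambda x: x,    # scores rise with the label
    "model_b": lambda x: -x,   # scores fall with the label
}
best, aucs = select_model(candidates, labels, inputs)
```

In a real pipeline the candidates would be networks trained on different image types (CT vs. MRI), and the winner per lesion type would serve as the claimed "third deep learning model."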

Prosecution Timeline

Mar 20, 2023
Application Filed
Oct 02, 2024
Non-Final Rejection — §103
Jan 06, 2025
Response Filed
Apr 07, 2025
Final Rejection — §103
Jul 07, 2025
Request for Continued Examination
Jul 11, 2025
Response after Non-Final Action
Jul 16, 2025
Non-Final Rejection — §103
Oct 20, 2025
Response Filed
Dec 11, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599331
Hyperspectral Image-Guided Ocular Imager for Alzheimer's Disease Pathologies
2y 5m to grant Granted Apr 14, 2026
Patent 12594038
ESTIMATION OF CONTACT FORCE OF CATHETER EXPANDABLE ASSEMBLY
2y 5m to grant Granted Apr 07, 2026
Patent 12588887
MEDICAL DEVICE POSITION SENSING COMPONENTS
2y 5m to grant Granted Mar 31, 2026
Patent 12582479
METHOD AND SYSTEM FOR AUTOMATIC PLANNING OF A MINIMALLY INVASIVE THERMAL ABLATION AND METHOD FOR TRAINING A NEURAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12569189
DEVICE, METHOD AND COMPUTER PROGRAM FOR DETERMINING SLEEP EVENT USING RADAR
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
83%
With Interview (+16.1%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 121 resolved cases by this examiner. Grant probability derived from career allow rate.
