DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of U.S. Patent No. 11786309. Although the claims at issue are not identical, they are not patentably distinct from each other.
Regarding claims 1 and 9. A computer-implemented method for identifying targets of interest (TOIs) for deep brain stimulation (DBS), the method comprising:
obtaining a set of medical imaging data pertaining to human cranial anatomy, the set of medical imaging data sampled from a plurality of humans in one or more imaging modalities,
wherein the medical imaging data comprises image slices taken along at least one of coronal, sagittal and axial planes relative to the human cranial anatomy;
re-slicing at least a portion of the medical imaging data through one or more planes that are at an angular orientation with respect to at least one of the coronal, sagittal and axial planes, thereby obtaining re-sliced medical imaging data;
training a first artificial neural network (ANN) engine using a portion of the medical imaging data that has not been re-sliced and a portion of the re-sliced medical imaging data, wherein the medical imaging data is appropriately labeled, to obtain a validated and tested ANN engine configured to identify one or more TOIs in a human brain image; and
executing the first ANN engine, in response to an input image of a patient's brain obtained using a particular imaging modality, to identify at least one particular TOI in the patient's brain for DBS (see claims 1 and 8 of US-PAT-NO: 11786309. With particular regard to the present claim 1, claim 1 of the patent includes all of the elements of the present claim 1. The "target of interest" in the present claim is not meaningfully different from the "region of interest" in the patent claim. Therefore, this is an "anticipation-type" nonstatutory double patenting rejection. See MPEP § 804(II)(B)(2).).
Regarding claims 2 and 10. The method as recited in claim 1, further comprising:
blending two or more co-registered image slices selected from at least one of the medical imaging data that has not been re-sliced or the portion of the re-sliced medical imaging data to obtain hybrid image slices (see claims 1 and 8 of US-PAT-NO: 11786309);
training the first ANN engine using a portion of the hybrid image slices in addition to the medical imaging data that has not been re-sliced and the portion of the re-sliced medical imaging data (see claims 2 and 9 of US-PAT-NO: 11786309).
Regarding claims 3 and 11. The method as recited in claim 1, further comprising: blending two or more co-registered image slices selected from at least one of the medical imaging data that has not been re-sliced or the portion of the re-sliced medical imaging data to obtain hybrid image slices; training a second ANN engine using a portion of the hybrid image slices to obtain a validated and tested ANN engine configured to identify one or more TOIs in the human brain image; executing the first and second ANN engines separately with respect to the input image of the patient's brain and combining the TOI identifications obtained respectively therefrom for improving quality of identification of the at least one particular TOI (see claims 1 and 8 of US-PAT-NO: 11786309).
Regarding claims 4 and 12. The method as recited in claim 1, further comprising performing, prior to the training, morphological image processing of image slices of the medical imaging data that has not been re-sliced or the portion of the re-sliced medical imaging data, wherein the morphological image processing includes at least one of edge detection, contrast boosting and shape detection (see claims 3 and 10 of US-PAT-NO: 11786309).
Regarding claims 5 and 13. The method as recited in claim 1, further comprising performing a dropout technique with respect to the first ANN engine wherein a select number of computational nodes are dropped from a particular neural network layer in each training epoch (see claims 4 and 11 of US-PAT-NO: 11786309).
Regarding claims 6 and 14. The method as recited in claim 1, further comprising: building an electrode scene with respect to the at least one particular TOI of the patient's brain image for placing a DBS lead thereat; and determining an optimal trajectory for implanting the DBS lead in the patient's brain relative to a particular electrode of the DBS lead (see claims 5 and 12 of US-PAT-NO: 11786309).
Regarding claims 7 and 15. The method as recited in claim 6, further comprising: co-registering a computed tomography (CT) image of the patient's brain with the input image of the patient having the at least one particular TOI identified for stimulation, wherein the input image of the patient's brain comprises one of a pre-operative or intra-operative magnetic resonance imaging (MRI) scan; and obtaining an entry point coordinate set and a target point coordinate set with respect to the patient's brain for performing an implant procedure to implant the DBS lead using the optimal trajectory, wherein the entry point coordinate set is operative to identify a burr hole location on the patient's cranium and the target point coordinate set is operative to identify a location relative to the at least one particular TOI in the patient's brain (see claims 5 and 13 of US-PAT-NO: 11786309).
Regarding claims 8 and 16. The method as recited in claim 7, further comprising: providing the entry point coordinate set, the target point coordinate set and data relating to the optimal trajectory to a stereotactic surgery system including a guiding apparatus containing the DBS lead; and automatically guiding the DBS lead to the at least one particular TOI based on the entry point coordinate set, the target point coordinate set and the data relating to the optimal trajectory data to place the particular electrode proximate to the at least one particular TOI (see claims 6 and 14 of US-PAT-NO: 11786309).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 6-9, 12, 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crawford (PGPUB: 2020/0297228 A1) in view of Pan (NPL: Early Detection of Alzheimer's Disease Using Magnetic Resonance Imaging: A Novel Approach Combining Convolutional Neural Networks and Ensemble Learning), and further in view of Leuthardt (PGPUB: 20190090749 A1).
Regarding claims 1 and 9. Crawford teaches a computer-implemented method for identifying targets of interest (TOIs) for deep brain stimulation (DBS), the method comprising:
obtaining a set of medical imaging data (see Fig. 3, paragraph 40, the operations 300 may include receiving a first image volume, such as a CT scan, from a preoperative image capture device at a first time (Block 302)) pertaining to human cranial anatomy, the set of medical imaging data sampled from a plurality of humans in one or more imaging modalities (see paragraph 67, to automatically determine whether a feature of the brain is a sulcus or a gyrus, machine learning or template matching may be used, with computerized models trained on the appearance on medical images of a sulcus or gyrus; sulci and gyri are features of the brain/cranial anatomy, so images of sulci and gyri correspond to image data pertaining to human cranial anatomy),
wherein the medical imaging data comprises image slices taken along at least one of coronal, sagittal and axial planes relative to the human cranial anatomy (see Fig. 13 A and B, paragraph 64, a computer system with software may be used to plan a linear trajectory 206 of an electrode into a target location deep within the brain. The surgeon first selects the target structure (e.g., subthalamic nucleus, globus pallidus interna, ventral intermediate nucleus) and then either estimates a first trajectory based on default angles away from the target, e.g., 15 degrees from the midsagittal plane and 60 degrees from the axial plane (FIG. 13), or requires the user to select an entry point anatomically).
However, Crawford does not expressly teach re-slicing at least a portion of the medical imaging data through one or more planes that are at an angular orientation with respect to at least one of the coronal, sagittal and axial planes, thereby obtaining re-sliced medical imaging data.
Pan teaches that to facilitate the CNN training, verification, and testing, a 3D image set of each subject was re-sliced into three 2D image sets, each of the sagittal, coronal, or transverse orientation (with X, Y, and Z axes perpendicular to the sagittal, coronal, and transverse planes, respectively). A preprocessed 3D MRI image (of 121 x 145 x 121) was thus re-sliced into 121 sagittal, 145 coronal, and 121 transverse slices; the values on the X, Y, and Z axes were {-90, -88, -87, ... 90}, {-126, -125, -123, ... 90}, and {-72, -71, -69, ... 108}, respectively. For example, X(i), i ∈ {-90, -88, -87, ... 90}, is the sagittal slice through the point [i, 0, 0]. Here, the numbers within the brackets were the MNI coordinates (see page 4, section: MRI Preprocessing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Crawford with Pan's technique of re-slicing a 3D image set of each subject into three 2D image sets, each of the sagittal, coronal, or transverse orientation (with X, Y, and Z axes perpendicular to the sagittal, coronal, and transverse planes, respectively), e.g., re-slicing a preprocessed 3D MRI image of 121 x 145 x 121 into 121 sagittal, 145 coronal, and 121 transverse slices, as re-slicing at least a portion of the medical imaging data through one or more planes that are at an angular orientation with respect to at least one of the coronal, sagittal and axial planes, thereby obtaining re-sliced medical imaging data. Combining these known elements from the prior art according to known methods and techniques would yield predictable results.
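As a purely illustrative sketch of the re-slicing technique discussed above (not code drawn from Pan or Crawford; the array dimensions mirror Pan's 121 x 145 x 121 example, and the rotation step is an assumed approximation of slicing at an angular orientation):

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical preprocessed 3D volume with axes ordered X (sagittal),
# Y (coronal), Z (transverse), mirroring Pan's 121 x 145 x 121 example
volume = np.random.rand(121, 145, 121)

# Orthogonal re-slicing: one stack of 2D slices per anatomical plane
sagittal = [volume[i, :, :] for i in range(volume.shape[0])]    # 121 sagittal slices
coronal = [volume[:, j, :] for j in range(volume.shape[1])]     # 145 coronal slices
transverse = [volume[:, :, k] for k in range(volume.shape[2])]  # 121 transverse slices

# Re-slicing through a plane at an angular orientation: rotate the volume
# about one axis, then take planar slices of the rotated volume
tilted = rotate(volume, angle=15.0, axes=(0, 2), reshape=False, order=1)
oblique = [tilted[i, :, :] for i in range(tilted.shape[0])]
```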
Crawford, in view of Pan, teaches:
training a first artificial neural network (ANN) engine using a portion of the medical imaging data that has not been re-sliced and a portion of the re-sliced medical imaging data (see Crawford, paragraph 67, to automatically determine whether a feature of the brain is a sulcus or a gyrus, machine learning or template matching may be used, with computerized models trained on the appearance on medical images of a sulcus or gyrus; any known machine learning model can be used as disclosed by Crawford, which includes artificial neural networks; see Pan, page 4, section: MRI Preprocessing, to facilitate the CNN training, verification, and testing, a 3D image set of each subject was re-sliced into three 2D image sets, each of the sagittal, coronal, or transverse orientation (with X, Y, and Z axes perpendicular to the sagittal, coronal, and transverse planes, respectively)), wherein the medical imaging data is appropriately labeled (see Crawford, paragraph 67, with computerized models trained on the appearance on medical images of a sulcus or gyrus; training data is commonly labeled for easier training), to obtain a validated and tested ANN engine configured to identify one or more TOIs in a human brain image (see Crawford, paragraph 67, to automatically determine whether a feature of the brain is a sulcus or a gyrus, machine learning or template matching may be used, with computerized models trained on the appearance on medical images of a sulcus or gyrus; see Crawford, paragraph 63, in an exemplary method, DBS electrodes are positioned in the brain through the gyrus 1204 and avoid penetrating through the sulcus 1202, especially at or near the surface of the cerebral cortex.
Similarly, the surgeon avoids passing the electrode through any region of the brain posterior to the coronal bony structure; the gyrus corresponds to the region of interest and the sulcus corresponds to the region of avoidance). However, the combination does not expressly teach obtaining a validated and tested ANN engine to distinguish a region of interest and a region of avoidance.
Leuthardt teaches that the more widespread use of functional mapping by clinical and surgical practitioners, made possible by supervised classification systems using the multilayer perceptron (MLP), enables the use of functional mapping for localization of therapeutic interventions such as neuromodulation, surgical ablation, or implants, for guidance of neurosurgical or radiotherapeutic interventions to avoid critical structures, for diagnosis of neurological disorders, and for assessment of treatment efficacy for neurological disorders (see paragraph 6); a multilayer perceptron is a type of artificial neural network. Perceptron training and testing used previously acquired data sets. All patients were young adults screened to exclude neurological impairment and psychotropic medications. Demographic information and acquisition parameters are given in Table 1 (see paragraph 136).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Leuthardt for providing a validated and tested ANN engine to distinguish regions of interest and avoidance. The motivation in doing so would lie in obtaining an ANN engine that has been appropriately tested and validated to obtain accurate and reliable results.
The combination teaches:
executing the first ANN engine, in response to an input image of a patient's brain obtained using a particular imaging modality, to identify at least one particular TOI in the patient's brain for DBS (see Crawford, paragraph 67, for evaluating possible electrode trajectories, image processing enables a small region (i.e., ROI) surrounding the planned path of the electrode to be evaluated to determine the number of times the trajectory passes "in-out-in" through brain folds. To automatically determine whether a feature of the brain is a sulcus or a gyrus, machine learning (construed as an ANN engine) or template matching may be used, with computerized models trained on the appearance on medical images of a sulcus or gyrus; see Crawford, Fig. 12, paragraph 63, in an exemplary method, DBS electrodes are positioned in the brain through the gyrus 1204 and avoid penetrating through the sulcus 1202, especially at or near the surface of the cerebral cortex. Similarly, the surgeon avoids passing the electrode through any region of the brain posterior to the coronal bony structure; see Leuthardt, paragraph 6, the more widespread use of functional mapping by clinical and surgical practitioners, made possible by supervised classification systems using the multilayer perceptron (MLP), enables the use of functional mapping for localization of therapeutic interventions such as neuromodulation, surgical ablation, or implants, for guidance of neurosurgical or radiotherapeutic interventions to avoid critical structures, for diagnosis of neurological disorders, and for assessment of treatment efficacy for neurological disorders).
However, the combination does not expressly teach the medical imaging data is appropriately labeled.
Leuthardt teaches that the landmark segmentation unit 330 may use the pre-trained segmentation model 134 to sequentially or simultaneously perform semantic segmentation on one or more slices of the 3D volume image 200 to generate the 3D mask image 136 labeled with the AC, PC, and third ventricle regions. For example, pixels in the 3D coordinate space of the 3D volume image 200 may be labeled with labels indicating correspondence to the AC region (e.g., red color or “1”), the PC region (e.g., green color or “2”), and the third ventricle region (e.g., blue color or “3”), or no correspondence (e.g., original pixel value or “0”) to generate the 3D mask image 136 (see paragraph 58).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combination with Leuthardt's teaching that pixels in the 3D coordinate space of the 3D volume image 200 may be labeled with labels indicating correspondence to the AC region, as teaching that the medical imaging data is appropriately labeled. Combining these known elements from the prior art according to known methods and techniques would yield predictable results.
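As a hedged, illustrative sketch of the kind of per-voxel labeling Leuthardt describes (the array size, region positions, and voxel counts below are invented for illustration and are not taken from the reference):

```python
import numpy as np

# Hypothetical 3D mask in the coordinate space of a volume image:
# 0 = no correspondence, 1 = AC region, 2 = PC region, 3 = third ventricle
mask = np.zeros((16, 16, 16), dtype=np.uint8)
mask[4:6, 7:9, 7:9] = 1    # illustrative anterior commissure (AC) voxels
mask[10:12, 7:9, 7:9] = 2  # illustrative posterior commissure (PC) voxels
mask[6:10, 7:9, 6:10] = 3  # illustrative third-ventricle voxels

# Per-label voxel counts, as might be used to sanity-check a segmentation mask
counts = {int(v): int((mask == v).sum()) for v in np.unique(mask)}
```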
Regarding claims 4 and 12. The combination teaches the method as recited in claim 1, further comprising performing, prior to the training, morphological image processing of image slices of the medical imaging data that has not been re-sliced or the portion of the re-sliced medical imaging data, wherein the morphological image processing includes at least one of edge detection, contrast boosting and shape detection (see Leuthardt, paragraph 54, the pre-processing unit 310 may be configured to enhance display characteristics of the 3D volume image 200; in one embodiment, each of the slices in the 3D volume image may be normalized such that image characteristics of the slices in the 3D volume image 200 may be enhanced in terms of, for example, brightness, contrast, etc.).
Regarding claims 6 and 14. The combination teaches the method as recited in claim 1, further comprising: building an electrode scene with respect to the at least one particular TOI of the patient's brain image for placing a DBS lead thereat (see Crawford, paragraph 64, a computer system with software may be used to plan a linear trajectory 206 of an electrode into a target location deep within the brain. The surgeon first selects the target structure (e.g., subthalamic nucleus, globus pallidus interna, ventral intermediate nucleus) and then either estimates a first trajectory based on default angles away from the target, e.g., 15 degrees from the midsagittal plane and 60 degrees from the axial plane (FIG. 13), or requires the user to select an entry point anatomically); and
determining an optimal trajectory for implanting the DBS lead in the patient's brain relative to a particular electrode of the DBS lead (see Crawford, paragraph 66, in an exemplary embodiment of the invention, image processing is used to analyze regions of the brain at and near the first trajectory, predicting and suggesting which direction and by how much the surgeon should move to meet the conditions for optimal placement of the electrode).
Regarding claims 7 and 15. The combination teaches the method as recited in claim 6, further comprising:
co-registering a computed tomography (CT) image of the patient's brain with the input image of the patient having the at least one particular TOI identified for stimulation, wherein the input image of the patient's brain comprises one of a pre-operative or intra-operative magnetic resonance imaging (MRI) scan (see Crawford, paragraph 38, some embodiments, multiple pre-operative exam images (e.g., CT or magnetic resonance (MR) images) may be co-registered such that it is possible to transform coordinates of any given point on the anatomy to the corresponding point on all other pre-operative exam images); and
obtaining an entry point coordinate set and a target point coordinate set with respect to the patient's brain for performing an implant procedure to implant the DBS lead using the optimal trajectory (see Crawford, paragraph 70, in another embodiment, the system provides a method for displaying the location or locations of best entry into the skull for the desired target, by generating a map of the surface of the skull or brain that indicates by use of visual characteristics such as colors to indicate the accuracy score of the electrode path according to the criteria determined by the user),
wherein the entry point coordinate set is operative to identify a burr hole location on the patient's cranium (see Crawford, paragraph 59, for example, the robot may pivot about the entry point 1162 into the anatomical feature 1128 (e.g., a patient's head). This entry point pivoting is advantageous as it allows the user to make a smaller burr hole without limiting their ability to adjust the target location 1164 intraoperatively. The cone 1160 represents the range of trajectories that may be reachable through a single entry hole) and the target point coordinate set is operative to identify a location relative to the at least one particular TOI in the patient's brain (see Crawford, paragraph 58, the trajectory to the target location 1058 is adjusted by the ring and arc angles of the stereotactic frame (e.g., a Leksell frame). These coordinates may be set manually, and the stereotactic frame may be used as a backup or as a redundant system in case the robot fails or cannot be tracked or registered successfully. The linear x, y, z offsets to the center point (i.e., target location 1058) are adjusted via the mechanisms of the frame. A cone 1060 is centered around the target location 1058, and shows the adjustment zone that can be achieved by modifying the ring and arc angles of the Leksell or other type of frame).
Regarding claims 8 and 16. The combination teaches the method as recited in claim 7, further comprising: providing the entry point coordinate set, the target point coordinate set and data relating to the optimal trajectory (see Crawford, paragraph 70, in another embodiment, the system provides a method for displaying the location or locations of best entry into the skull for the desired target, by generating a map of the surface of the skull or brain that indicates by use of visual characteristics such as colors to indicate the accuracy score of the electrode path according to the criteria determined by the user; see Crawford, paragraphs 58-59, further describing how coordinates for entry and target points are determined) to a stereotactic surgery system including a guiding apparatus containing the DBS lead (see Crawford, paragraph 57, one use for the embodiments described herein is to plan trajectories and to control a robot to move into a desired trajectory, after which the surgeon will place implants such as electrodes through a guide tube held by the robot); and
automatically guiding the DBS lead to the at least one particular TOI based on the entry point coordinate set, the target point coordinate set and the data relating to the optimal trajectory data to place the particular electrode proximate to the at least one particular TOI (see Crawford, paragraph 57, one use for the embodiments described herein is to plan trajectories and to control a robot to move into a desired trajectory, after which the surgeon will place implants such as electrodes through a guide tube held by the robot; see Crawford, paragraph 71, alternately, the robot system may utilize a “gravity” mode, wherein, when an applied force is applied by the user, the robot arm would automatically position the end effector toward the nearest and most accurate entry location. In another embodiment, an applied force by the user may also move the robot arm off that trajectory and automatically cause the robot arm to move the end effector toward then next closest accurate location with the accuracy score exceeding a base threshold).
Claim(s) 2 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crawford (PGPUB: 2020/0297228 A1) in view of Pan (NPL: Early Detection of Alzheimer's Disease Using Magnetic Resonance Imaging: A Novel Approach Combining Convolutional Neural Networks and Ensemble Learning), in view of Leuthardt (PGPUB: 20190090749 A1), and further in view of Nalepa (NPL: Data Augmentation for Brain-Tumor Segmentation: A Review).
Regarding claims 2 and 10. The method as recited in claim 1, further comprising:
blending two or more co-registered image slices selected from at least one of the medical imaging data that has not been re-sliced or the portion of the re-sliced medical imaging data (see Crawford, paragraph 38, multiple pre-operative exam images (e.g., CT or magnetic resonance (MR) images) may be co-registered such that it is possible to transform coordinates of any given point on the anatomy to the corresponding point on all other pre-operative exam images; see Pan, page 4, section: MRI Preprocessing, to facilitate the CNN training, verification, and testing, a 3D image set of each subject was re-sliced into three 2D image sets, each of the sagittal, coronal, or transverse orientation (with X, Y, Z axes perpendicular to the sagittal, coronal, and transverse planes, respectively)). However, the combination does not expressly teach that the co-registered image slices are blended to obtain hybrid image slices.
Nalepa teaches blending images: a promising approach of combining training samples using their linear combinations (referred to as mixup) was proposed by Zhang et al. (2017), and was further enhanced for medical image segmentation by Eaton-Rosen et al. in their mixmatch algorithm (Eaton-Rosen et al., 2019), which additionally introduced a technique of selecting training samples that undergo linear combination (see Section 2.4: Data Augmentation by Generating Artificial Data), for increasing training sample sizes and improving the quality of training samples.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Crawford, in view of Pan, and in further view of Leuthardt, with the above teachings of Nalepa to incorporate the blending of images. The motivation in doing so would lie in generating a larger and higher quality training set, resulting in a more accurate machine learning network.
Crawford, in view of Pan, and in further view of Leuthardt and Nalepa, teaches: training the first ANN engine using a portion of the hybrid image slices in addition to the medical imaging data that has not been re-sliced and the portion of the re-sliced medical imaging data (see Crawford, paragraph 67, for evaluating possible electrode trajectories, image processing enables a small region surrounding the planned path of the electrode to be evaluated to determine the number of times the trajectory passes "in-out-in" through brain folds. To automatically determine whether a feature of the brain is a sulcus or a gyrus, machine learning or template matching may be used, with computerized models trained on the appearance on medical images of a sulcus or gyrus; see Nalepa, Section 2.4: Data Augmentation by Generating Artificial Data, a promising approach of combining training samples using their linear combinations (referred to as mixup) was proposed by Zhang et al. (2017), and further enhanced for medical image segmentation by Eaton-Rosen et al. in their mixmatch algorithm (Eaton-Rosen et al., 2019), which additionally introduced a technique of selecting training samples that undergo linear combination).
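The mixup-style blending Nalepa surveys amounts to a convex combination of two co-registered training samples and their labels. The sketch below is an illustrative reconstruction only; the function name, the Beta-distribution parameter, and the constant test images are assumptions, not Nalepa's or Zhang et al.'s code:

```python
import numpy as np

def mixup(image_a, label_a, image_b, label_b, alpha=0.2, rng=None):
    """Blend two co-registered samples by a linear combination (mixup-style)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    hybrid_image = lam * image_a + (1.0 - lam) * image_b
    hybrid_label = lam * label_a + (1.0 - lam) * label_b
    return hybrid_image, hybrid_label

# Two hypothetical co-registered slices with constant intensities 0 and 1
a, b = np.zeros((8, 8)), np.ones((8, 8))
hybrid, hybrid_label = mixup(a, 0.0, b, 1.0)
```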
Claim(s) 5 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crawford (PGPUB: 2020/0297228 A1) in view of Pan (NPL: Early Detection of Alzheimer's Disease Using Magnetic Resonance Imaging: A Novel Approach Combining Convolutional Neural Networks and Ensemble Learning), in view of Leuthardt (PGPUB: 20190090749 A1), and further in view of Siemionow (PGPUB: 20190142519 A1).
Regarding claims 5 and 13. The combination does not expressly teach the method as recited in claim 1, further comprising performing a dropout technique with respect to the first ANN engine wherein a select number of computational nodes are dropped from a particular neural network layer in each training epoch.
Siemionow teaches that training the CNN model may include training a CNN model including a contracting path and an expanding path. The contracting path may include a number of convolutional layers, a number of pooling layers and dropout layers. Each pooling and dropout layer may be preceded by at least one convolutional layer (see paragraph 30).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Crawford, in view of Pan, and in further view of Leuthardt, with the above teachings of Siemionow to incorporate the use of dropout in the training of a neural network. The motivation in doing so would lie in preventing overfitting in the neural network, thereby yielding more accurate results.
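Dropout, as claimed, zeroes a randomly selected subset of a layer's computational nodes on each training epoch. The minimal sketch below is illustrative only (the layer size, drop probability, and inverted-dropout rescaling are assumptions, not Siemionow's implementation):

```python
import numpy as np

def dropout(layer_activations, drop_prob, rng):
    """Inverted dropout: zero a random subset of nodes and rescale the survivors."""
    keep_mask = rng.random(layer_activations.shape) >= drop_prob
    return layer_activations * keep_mask / (1.0 - drop_prob)

rng = np.random.default_rng(42)
layer = np.ones(1000)          # hypothetical activations of one network layer
for epoch in range(3):         # a fresh random mask is drawn each training epoch
    out = dropout(layer, drop_prob=0.5, rng=rng)
```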
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571) 270-5536. The examiner can normally be reached 9:00 am - 7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663