DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, Application No. 18/583,328, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The claim limitation “merging diffusion tensor image (DTI) data associated with a scan of a patient body with a three-dimensional medical model to create a merged three-dimensional image in a three-dimensional coordinate space;” was not previously discussed in Application No. 18/583,328. The claim is therefore given the effective filing date of the current application, which is 05/23/2024.
Claim Objections
Claim 13 is objected to because of the following informalities: the limitation “A computer program product (“product”) comprising” should be removed from the claim to improve clarity, as the claims that depend from claim 13 refer to “the non-transitory computer-readable medium”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 5, 7, 11, 13, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shalayev et al. (US 2019/0290361) (hereinafter referred to as Shalayev) in view of Avisar et al. (US 2021/0015583) (hereinafter referred to as Avisar).
Regarding claim 1, Shalayev teaches a method (The present invention has utility as a system and method for designing an implant that improves bone-to implant stability and osseointegration. Intraoperatively, a computer-assisted surgical system may prepare the anatomy and aid in implant placement to exploit the stability and osseointegration design features of the implant. See paragraph [0018]) comprising: identifying fibers of selected portions of the merged three-dimensional image associated with a region of interest defined by a virtual area selection performed by a physical instrument (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. The skull model 102 may have been generated from a CT scan, while the inner brain tissue may have been generated from an MRI scan, with both scans being fused to plan the procedure in the GUI 100. The user determines a region of bone required for removal 104 to reach a targeted site in the brain. The geometry and amount of bone to be removed 104 is at the discretion of the surgeon, but it should be the least amount of bone that can be removed while maintaining the ability to reach the target site and conduct the procedure safely. The GUI 100 includes tools to design the initial implant 106 to replace the required region for removal 104. In an embodiment, the initial implant is designed with a point and click tool that allows the user to insert singular points at the desired outer boundary of the initial implant. Subsequently, the planning software automatically generates the implant volume and geometry by connecting the singular points with splines that follow the curvature of the bone using mathematical models such as non-uniform rational Bezier spline (NURBS). This ensures the curvature of the implant matches the natural curvature of the skull. See paragraph [0058]) (The initial implant area is zoomed-in so the user can identify stability regions surrounding the implant as shown at 96. A bone density map 108 of the skull 102 is displayed to the user in the form of brightness values, which provides the relative bone densities of the bone. The user highlights or labels the desired stability regions as represented by 110a and 110b and one or more stability features are augmented to the initial implant to interact with these stability regions 110. See paragraph [0059]) (The physical instrument is the computing device used for assisted surgery);
highlighting the fibers as a fiber bundle (a region of bone tissue is highlighted in Figure 5 and can be considered a fiber bundle; bone tissue contains collagen fibers, which make up the organic part of the bone matrix, so by highlighting a region of bone the user is highlighting fibers that can be considered a fiber bundle); and
displaying the highlighted fiber bundle as a colorized subset of the merged three-dimensional image (by being highlighted, the fiber bundle is displayed as colorized) (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. See paragraph [0058]), but is silent to merging diffusion tensor image (DTI) data associated with a scan of a patient body with a three-dimensional medical model to create a merged three-dimensional image in a three-dimensional coordinate space.
Avisar teaches utilizing an MD6DM system, a multidimensional model providing six degrees of freedom of movement within a spherical view, which combines patient-specific CT, MRI, DTI, etc., with a representative brain model such as atlas data to create a partially patient specific model (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]).
Shalayev and Avisar both teach utilizing combined medical models that include MRI data (DTI is a form of MRI) for presenting information to a medical professional, and Avisar teaches that the combined data can be partially patient specific. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev with the data merging techniques of Avisar such that the medical professional could visualize a representative brain model in combination with the DTI data to look for lesions or tumors and provide additional care in response to the findings.
Regarding claim 5, Shalayev in view of Avisar teaches the method of claim 1, but is silent to wherein the three-dimensional medical model comprises a magnetic resonance imaging (MRI) scan. However,
Avisar further teaches that the partially patient specific model can include multiple datasets, including an MRI scan (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the MRI data combination technique of Avisar such that the three-dimensional medical model could also include MRI data and atlas data to improve the data availability during review of the scans.
Regarding claim 7, Shalayev teaches a system (The present invention has utility as a system and method for designing an implant that improves bone-to implant stability and osseointegration. Intraoperatively, a computer-assisted surgical system may prepare the anatomy and aid in implant placement to exploit the stability and osseointegration design features of the implant. See paragraph [0018]) (With reference to FIG. 4, a particular embodiment of a robotic surgical system 50 to prepare and/or install the implant is shown in the context of an operating room (OR). The surgical system 50 generally includes a surgical robot 52, a computing system 54, and a tracking system 56. See paragraph [0047]) comprising one or more processors, and a non-transitory computer-readable medium including one or more sequences of instructions that, when executed by the one or more processors, cause the system to perform operations (The computing system 54 generally includes a planning computer 70 including a processor; a device computer 72 including a processor; a tracking computer 74 including a processor; and peripheral devices. Processors operate in system 54 to perform computations associated with the inventive method. See paragraph [0049]) (The planning computer 70 contains hardware (e.g., processors, controllers, and memory), software, data and utilities that are dedicated to the implant design and planning of a surgical procedure, either pre-operatively or intraoperatively. See paragraph [0050]) comprising:
identifying fibers of selected portions of the merged three-dimensional image associated with a region of interest defined by a virtual area selection performed by a physical instrument (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. The skull model 102 may have been generated from a CT scan, while the inner brain tissue may have been generated from an MRI scan, with both scans being fused to plan the procedure in the GUI 100. The user determines a region of bone required for removal 104 to reach a targeted site in the brain. The geometry and amount of bone to be removed 104 is at the discretion of the surgeon, but it should be the least amount of bone that can be removed while maintaining the ability to reach the target site and conduct the procedure safely. The GUI 100 includes tools to design the initial implant 106 to replace the required region for removal 104. In an embodiment, the initial implant is designed with a point and click tool that allows the user to insert singular points at the desired outer boundary of the initial implant. Subsequently, the planning software automatically generates the implant volume and geometry by connecting the singular points with splines that follow the curvature of the bone using mathematical models such as non-uniform rational Bezier spline (NURBS). This ensures the curvature of the implant matches the natural curvature of the skull. See paragraph [0058]) (The initial implant area is zoomed-in so the user can identify stability regions surrounding the implant as shown at 96. A bone density map 108 of the skull 102 is displayed to the user in the form of brightness values, which provides the relative bone densities of the bone. The user highlights or labels the desired stability regions as represented by 110a and 110b and one or more stability features are augmented to the initial implant to interact with these stability regions 110. See paragraph [0059]) (The physical instrument is the computing device used for assisted surgery);
highlighting the fibers as a fiber bundle (a region of bone tissue is highlighted in Figure 5 and can be considered a fiber bundle; bone tissue contains collagen fibers, which make up the organic part of the bone matrix, so by highlighting a region of bone the user is highlighting fibers that can be considered a fiber bundle) (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. See paragraph [0058]); and
displaying the highlighted fiber bundle as a colorized subset of the merged three-dimensional image (by being highlighted, the fiber bundle is displayed as colorized) (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. See paragraph [0058]), but is silent to merging diffusion tensor image (DTI) data associated with a scan of a patient body with a three-dimensional medical model to create a merged three-dimensional image in a three-dimensional coordinate space.
Avisar teaches utilizing an MD6DM system, a multidimensional model providing six degrees of freedom of movement within a spherical view, which combines patient-specific CT, MRI, DTI, etc., with a representative brain model such as atlas data to create a partially patient specific model (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]).
Shalayev and Avisar both teach utilizing combined medical models that include MRI data (DTI is a form of MRI) for presenting information to a medical professional, and Avisar teaches that the combined data can be partially patient specific. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev with the data merging techniques of Avisar such that the medical professional could visualize a representative brain model in combination with the DTI data to look for lesions or tumors and provide additional care in response to the findings.
Regarding claim 11, Shalayev in view of Avisar teaches the system of claim 7, but is silent to wherein the three-dimensional medical model comprises a magnetic resonance imaging (MRI) scan. However,
Avisar further teaches that the partially patient specific model can include multiple datasets, including an MRI scan (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the MRI data combination technique of Avisar such that the three-dimensional medical model could also include MRI data and atlas data to improve the data availability during review of the scans.
Regarding claim 13, Shalayev teaches a computer program product (“product”) comprising a non-transitory computer-readable medium having a computer-readable program code embodied therein to be executed by one or more processors, the program code including instructions (The computing system 54 generally includes a planning computer 70 including a processor; a device computer 72 including a processor; a tracking computer 74 including a processor; and peripheral devices. Processors operate in system 54 to perform computations associated with the inventive method. See paragraph [0049]) (The planning computer 70 contains hardware (e.g., processors, controllers, and memory), software, data and utilities that are dedicated to the implant design and planning of a surgical procedure, either pre-operatively or intraoperatively. See paragraph [0050]) to perform:
identifying fibers of selected portions of the merged three-dimensional image associated with a region of interest defined by a virtual area selection performed by a physical instrument (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. The skull model 102 may have been generated from a CT scan, while the inner brain tissue may have been generated from an MRI scan, with both scans being fused to plan the procedure in the GUI 100. The user determines a region of bone required for removal 104 to reach a targeted site in the brain. The geometry and amount of bone to be removed 104 is at the discretion of the surgeon, but it should be the least amount of bone that can be removed while maintaining the ability to reach the target site and conduct the procedure safely. The GUI 100 includes tools to design the initial implant 106 to replace the required region for removal 104. In an embodiment, the initial implant is designed with a point and click tool that allows the user to insert singular points at the desired outer boundary of the initial implant. Subsequently, the planning software automatically generates the implant volume and geometry by connecting the singular points with splines that follow the curvature of the bone using mathematical models such as non-uniform rational Bezier spline (NURBS). This ensures the curvature of the implant matches the natural curvature of the skull. See paragraph [0058]) (The initial implant area is zoomed-in so the user can identify stability regions surrounding the implant as shown at 96. A bone density map 108 of the skull 102 is displayed to the user in the form of brightness values, which provides the relative bone densities of the bone. The user highlights or labels the desired stability regions as represented by 110a and 110b and one or more stability features are augmented to the initial implant to interact with these stability regions 110. See paragraph [0059]) (The physical instrument is the computing device used for assisted surgery);
highlighting the fibers as a fiber bundle (a region of bone tissue is highlighted in Figure 5 and can be considered a fiber bundle; bone tissue contains collagen fibers, which make up the organic part of the bone matrix, so by highlighting a region of bone the user is highlighting fibers that can be considered a fiber bundle) (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. See paragraph [0058]); and
displaying the highlighted fiber bundle as a colorized subset of the merged three-dimensional image (by being highlighted, the fiber bundle is displayed as colorized) (FIG. 5 illustrates the progression of designing an implant for a craniotomy procedure in accordance with the embodiments described herein. A model of the skull 102 is obtained and displayed in a GUI 100 as shown at 94. See paragraph [0058]), but is silent to merging diffusion tensor image (DTI) data associated with a scan of a patient body with a three-dimensional medical model to create a merged three-dimensional image in a three-dimensional coordinate space.
Avisar teaches utilizing an MD6DM system, a multidimensional model providing six degrees of freedom of movement within a spherical view, which combines patient-specific CT, MRI, DTI, etc., with a representative brain model such as atlas data to create a partially patient specific model (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]).
Shalayev and Avisar both teach utilizing combined medical models that include MRI data (DTI is a form of MRI) for presenting information to a medical professional, and Avisar teaches that the combined data can be partially patient specific. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev with the data merging techniques of Avisar such that the medical professional could visualize a representative brain model in combination with the DTI data to look for lesions or tumors and provide additional care in response to the findings.
Regarding claim 17, Shalayev in view of Avisar teaches the non-transitory computer-readable medium of claim 13, but is silent to wherein the three-dimensional medical model comprises a magnetic resonance imaging (MRI) scan.
However, Avisar further teaches that the partially patient specific model can include multiple datasets, including an MRI scan (The MD6DM is rendered in real time using a SNAP model built from the patient's own data set of medical images including CT, MRI, DTI etc., and is patient specific. A representative brain model, such as Atlas data, can be integrated to create a partially patient specific model if the surgeon so desires. The model gives a 360° spherical view from any point on the MD6DM. Using the MD6DM, the viewer is positioned virtually inside the anatomy and can look and observe both anatomical and pathological structures as if he were standing inside the patient's body. The viewer can look up, down, over the shoulders etc., and will see native structures in relation to each other, exactly as they are found in the patient. Spatial relationships between internal structures are preserved and can be appreciated using the MD6DM. See paragraph [0023]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the MRI data combination technique of Avisar such that the three-dimensional medical model could also include MRI data and atlas data to improve the data availability during review of the scans.
Claim(s) 2, 8, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shalayev et al. (US 2019/0290361) (hereinafter referred to as Shalayev) in view of Avisar et al. (US 2021/0015583) (hereinafter referred to as Avisar), further in view of Buller et al. (US 2020/0004225) (hereinafter referred to as Buller).
Regarding claim 2, Shalayev in view of Avisar teaches the method of claim 1, but is silent to further comprising displaying areas other than the fiber bundle of the three-dimensional image as a grayscale color.
Buller teaches configurable color options for 3D models (The display aid may be configurable, with at least one color, pattern, and/or shade (e.g., grayscale) selected for: the virtual model of the 3D object; selected portion(s) of the virtual model of the 3D object; highlighted portions of the virtual model of the 3D object; an application background; a reference plane; and/or a selection tool overlay. See paragraph [0233]).
Shalayev in view of Avisar and Buller teach presenting 3D model data to a user, and Buller teaches that colors, including grayscale, can be selected for various portions of the model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the color configuration techniques of Buller such that the user could visualize individual portions in various colors.
Regarding claim 8, Shalayev in view of Avisar teaches the system of claim 7, but is silent to wherein the one or more sequences of instructions, when executed by the one or more processors, cause the system to further perform: displaying areas other than the fiber bundle of the three-dimensional image as a grayscale color.
Buller teaches configurable color options for 3D models (The display aid may be configurable, with at least one color, pattern, and/or shade (e.g., grayscale) selected for: the virtual model of the 3D object; selected portion(s) of the virtual model of the 3D object; highlighted portions of the virtual model of the 3D object; an application background; a reference plane; and/or a selection tool overlay. See paragraph [0233]).
Shalayev in view of Avisar and Buller teach presenting 3D model data to a user, and Buller teaches that colors, including grayscale, can be selected for various portions of the model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the color configuration techniques of Buller such that the user could visualize individual portions in various colors.
Regarding claim 14, Shalayev in view of Avisar teaches the non-transitory computer-readable medium of claim 13, but is silent to wherein the program code includes instructions to further perform: displaying areas other than the fiber bundle of the three-dimensional image as a grayscale color.
Buller teaches configurable color options for 3D models (The display aid may be configurable, with at least one color, pattern, and/or shade (e.g., grayscale) selected for: the virtual model of the 3D object; selected portion(s) of the virtual model of the 3D object; highlighted portions of the virtual model of the 3D object; an application background; a reference plane; and/or a selection tool overlay. See paragraph [0233]).
Shalayev in view of Avisar and Buller teach presenting 3D model data to a user, and Buller teaches that colors, including grayscale, can be selected for various portions of the model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the color configuration techniques of Buller such that the user could visualize individual portions in various colors.
Claim(s) 3, 9, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shalayev et al. (US 2019/0290361) (hereinafter referred to as Shalayev) in view of Avisar et al. (US 2021/0015583) (hereinafter referred to as Avisar), further in view of Moreau (US 2024/0225472) (hereinafter referred to as Moreau).
Regarding claim 3, Shalayev in view of Avisar teaches the method of claim 1, but is silent to further comprising applying a plurality of different colors to the merged three-dimensional image as respective different regions of a brain of the scan of the patient body; and displaying the merged three-dimensional image including the brain with the highlighted fiber bundle and the plurality of different colors.
Moreau teaches applying different colors to different white matter fiber bundles of a 3D representation (The displaying may include, for example, displaying only the 3D representations of white matter (thereby excluding other 3D representations of tractogram streamlines of the tractogram not output by the method) and/or highlighting (e.g., with a different color or with a label) the 3D representations of white matter fiber bundles represented by the tractogram streamlines output by the method. See paragraph [0082]).
Shalayev in view of Avisar and Moreau teach presenting 3D visual information to users, and Moreau teaches that individual fiber bundles can have different colors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Shalayev in view of Avisar with the region coloring technique of Moreau such that the user could visualize different regions in the color they desired.
Regarding claim 9, Shalayev in view of Avisar teaches the system of claim 7, but is silent to wherein the one or more sequences of instructions, when executed by the one or more processors, cause the system to further perform: applying a plurality of different colors to the merged three-dimensional image as respective different regions of a brain of the scan of the patient body; and displaying the merged three-dimensional image including the brain with the highlighted fiber bundle and the plurality of different colors.
Moreau teaches applying different colors to different white matter fiber bundles of a 3D representation (The displaying may include, for example, displaying only the 3D representations of white matter (thereby excluding other 3D representations of tractogram streamlines of the tractogram not output by the method) and/or highlighting (e.g., with a different color or with a label) the 3D representations of white matter fiber bundles represented by the tractogram streamlines output by the method. See paragraph [0082]).
Shalayev in view of Avisar and Moreau teach presenting 3D visual information to users, and Moreau teaches that individual fiber bundles can have different colors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shalayev in view of Avisar with the region-coloring technique of Moreau such that the user could visualize different regions in a desired color.
Regarding claim 15, Shalayev in view of Avisar teaches the non-transitory computer-readable medium of claim 13, but is silent to wherein the program code includes instructions to further perform: applying a plurality of different colors to the merged three-dimensional image as respective different regions of a brain of the scan of the patient body; and displaying the merged three-dimensional image including the brain with the highlighted fiber bundle and the plurality of different colors.
Moreau teaches applying different colors to different white matter fiber bundles of a 3D representation (The displaying may include, for example, displaying only the 3D representations of white matter (thereby excluding other 3D representations of tractogram streamlines of the tractogram not output by the method) and/or highlighting (e.g., with a different color or with a label) the 3D representations of white matter fiber bundles represented by the tractogram streamlines output by the method. See paragraph [0082]).
Shalayev in view of Avisar and Moreau teach presenting 3D visual information to users, and Moreau teaches that individual fiber bundles can have different colors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shalayev in view of Avisar with the region-coloring technique of Moreau such that the user could visualize different regions in a desired color.
Allowable Subject Matter
Claims 4, 6, 10, 12, 16, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, is silent to the limitations, “wherein the identified fibers and the region of interest are selected by a virtual portion of the physical instrument making contact with the region of interest inside the merged three-dimensional image.”, of claim 4 when read in light of the rest of the limitations of claim 4 and the claims from which claim 4 depends; thus, claim 4 contains allowable subject matter.
The prior art of record, alone or in combination, is silent to the limitations, “wherein the merging the (DTI) data associated with a three-dimensional medical model comprises matching intensities one or more pixelated portions of the DTI data with intensities one or more pixelated portions of the three-dimensional medical model.”, of claim 6 when read in light of the rest of the limitations of claim 6 and the claims from which claim 6 depends; thus, claim 6 contains allowable subject matter.
The prior art of record, alone or in combination, is silent to the limitations, “wherein the identified fibers and the region of interest are selected by a virtual portion of the physical instrument making contact with the region of interest inside the merged three-dimensional image.”, of claim 10 when read in light of the rest of the limitations of claim 10 and the claims from which claim 10 depends; thus, claim 10 contains allowable subject matter.
The prior art of record, alone or in combination, is silent to the limitations, “wherein the merging the (DTI) data associated with a three-dimensional medical model comprises matching intensities one or more pixelated portions of the DTI data with intensities one or more pixelated portions of the three-dimensional medical model.”, of claim 12 when read in light of the rest of the limitations of claim 12 and the claims from which claim 12 depends; thus, claim 12 contains allowable subject matter.
The prior art of record, alone or in combination, is silent to the limitations, “wherein the identified fibers and the region of interest are selected by a virtual portion of the physical instrument making contact with the region of interest inside the merged three-dimensional image.”, of claim 16 when read in light of the rest of the limitations of claim 16 and the claims from which claim 16 depends; thus, claim 16 contains allowable subject matter.
The prior art of record, alone or in combination, is silent to the limitations, “wherein the merging the (DTI) data associated with a three-dimensional medical model comprises matching intensities one or more pixelated portions of the DTI data with intensities one or more pixelated portions of the three-dimensional medical model.”, of claim 18 when read in light of the rest of the limitations of claim 18 and the claims from which claim 18 depends; thus, claim 18 contains allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON whose telephone number is (571) 272-0936. The examiner can normally be reached M-F, 7:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (572) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS R WILSON/Primary Examiner, Art Unit 2611