DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on January 8, 2025; February 20, 2025; April 8, 2025; and December 15, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: DEFORMATION OF 3D NAVIGATIONAL ROADMAP BASED ON CURVE INDUCTIVE SENSOR.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 3-9, 11-12, 16-17, 23-26, 40-52 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Walker et al. (US PGPUB 20170151027; hereinafter "Walker1").
With regards to Claim 1, Walker1 discloses a method of displaying a navigational view for an endoluminal device (processor for performing the method; see Walker1 ¶ [0051]), comprising:
a. receiving at least one 2-D image comprising a lumen to be navigated by said endoluminal device (tracking and rendering a virtual instrument on fluoroscopic images; see Walker1 ¶ [0073]);
b. detecting a 3-D location of a tip and a shape of said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]);
c. calculating a position and/or a shape of said endoluminal device within said at least one 2-D image (the time history of sensor locations provides insight as to the shape of the anatomy or the possible shape of the instrument; see Walker1 ¶ [0077]);
d. displaying on said 2-D image said endoluminal device according to said calculation (tracking and rendering a virtual instrument on fluoroscopic images; see Walker1 ¶ [0073]);
e. amending said displaying of said endoluminal device on said 2-D image by repeating steps "b" and "c" while said endoluminal device is being moved (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update {i.e. amending} the position of the elongate instrument 18 relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
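For clarity of the mapping above (an illustrative sketch only, not language from Walker1 or the claims; all function and variable names are hypothetical), the registration and display relied upon in items "b" through "e" can be modeled as a rigid transform from the EM sensor frame into the fluoroscopy frame followed by a pinhole-style projection onto the 2-D image:

```python
# Illustrative sketch only: mapping an EM-sensed 3-D tip position into a 2-D
# fluoroscopic image via a registration transform. The 4x4 transform and the
# 3x3 intrinsic matrix are assumed inputs, not values from the record.
import numpy as np

def em_to_fluoro(p_em: np.ndarray, T_em_to_fluoro: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous rigid transform (EM frame -> fluoro frame)."""
    p_h = np.append(p_em, 1.0)          # homogeneous coordinates [x, y, z, 1]
    return (T_em_to_fluoro @ p_h)[:3]

def project_to_image(p_fluoro: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole-style projection of a 3-D point onto the 2-D image plane."""
    u, v, w = K @ p_fluoro              # K is a 3x3 intrinsic matrix
    return np.array([u / w, v / w])     # pixel coordinates

# Example: a tip sensed at (10, 20, 300) mm in the EM frame, identity registration.
T = np.eye(4)
K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])
tip_px = project_to_image(em_to_fluoro(np.array([10.0, 20.0, 300.0]), T), K)
```

Repeating the sensing and projection while the device moves, and redrawing the result, corresponds to the amending step "e".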
With regards to Claim 3 (dependent upon claim 1), wherein said at least one 2-D image is a 2-D X-ray image (tracking and rendering a virtual instrument on fluoroscopic images; see Walker1 ¶ [0073]).
With regards to Claim 4 (dependent upon claim 1), wherein said at least one 2-D image comprises a 2-D view of at least one segment of said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein each imaged radiopaque marker corresponds to a segment; see Walker1 ¶ [0068 & 0070]).
With regards to Claim 5 (dependent upon claim 4), wherein said calculating a position and/or a shape of said endoluminal device comprises comparing said at least one segment as viewed in said at least one 2-D image with said detected 3-D location of said tip and said shape of said endoluminal device in order to identify a location of said at least one segment along said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein the registration step {e.g. coordinate transformation matrices} amounts to a bidirectional comparison; see Walker1 ¶ [0068 & 0070]).
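The characterization of the coordinate transformation matrices as a bidirectional comparison can be made concrete with standard rigid-registration algebra (an illustration, not a quotation from Walker1): for a rigid transform $T_{EM \to F}$ relating the EM frame to the fluoroscopy frame,

$$p_F = T_{EM \to F}\, p_{EM} \quad\Longleftrightarrow\quad p_{EM} = T_{EM \to F}^{-1}\, p_F,$$

so a segment observed in either frame can be compared against locations sensed in the other.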
With regards to Claim 6 (dependent upon claim 5), further comprising utilizing said identified location to perform a comparison between said detected 3-D location of said tip and said shape of said endoluminal device and said at least one 2-D image (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]).
With regards to Claim 7 (dependent upon claim 6), further comprising analyzing said comparison to generate a navigational view of said endoluminal device on said at least one 2-D image (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]).
With regards to Claim 8 (dependent upon claim 7), further comprising displaying said generated navigational view by re-projecting a result of said analysis on said at least one 2-D image (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated, i.e. the updating step amounts to reprojecting the sensor locations from the registration step to the anatomical model; see Walker1 ¶ [0077]).
With regards to Claim 9 (dependent upon claim 8), further comprising amending said displaying of said generated navigational view by repeating:
a. said calculating a position and/or a shape of said endoluminal device comprises comparing said at least one segment as viewed in said at least one 2-D image with said detected 3-D location of said tip and said shape of said endoluminal device in order to identify a location of said at least one segment along said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]);
b. utilizing said identified location to perform a comparison between said detected 3-D location of said tip and said shape of said endoluminal device and said at least one 2-D image (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]);
c. analyzing said comparison to generate a navigational view of said endoluminal device on said at least one 2-D image (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]); and
d. displaying said generated navigational view by re-projecting a result of said analysis on said at least one 2-D image (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated, i.e. the updating step amounts to reprojecting the sensor locations from the registration step to the anatomical model; see Walker1 ¶ [0077]);
while said endoluminal device is being moved (by tracking the path of the instrument through the anatomy over time {i.e. while moving}, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]).
With regards to Claim 11 (dependent upon claim 1), wherein said at least one 2-D image comprises, within said at least one 2-D image, a 2-D view of one or more EM markers and/or EM reference sensors located at EM known locations (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]).
With regards to Claim 12 (dependent upon claim 11), further comprising one or more of:
a. correlating said known 3-D location of said one or more EM markers and/or EM reference sensors with a location of said one or more EM markers and/or EM reference sensors in said 2-D image (claimed in the alternative);
b. comparing said correlation with said detected 3-D location of said tip and said shape of said endoluminal device (claimed in the alternative);
c. analyzing said comparison to generate a navigational view of said endoluminal device on said at least one 2-D image (claimed in the alternative);
d. displaying said generated navigational view by re-projecting a result of said analysis on said at least one 2-D image (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update the position of the elongate instrument 18 {i.e. reprojecting result of analysis} relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
With regards to Claim 16 (dependent upon claim 1), wherein said at least one 2-D image comprises a 2-D view of a plurality of markers located in known locations along said endoluminal device (wherein the probe includes both EM & radiopaque marker tracking sensors; see Walker1 ¶ [0070]).
With regards to Claim 17 (dependent upon claim 16), further comprising one or more of:
a. comparing a location of said plurality of markers located in known locations along said endoluminal device as viewed in said at least one 2-D image with their actual 3-D known location along said endoluminal device and according to said detected 3-D location of said tip and said shape of said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]);
b. analyzing said comparison to generate a navigational view of said endoluminal device on said at least one 2-D image (registering {i.e. analyzing said comparison} the EM sensor with the anatomical image frame {e.g. 2D/3D fluoroscopic image} and displaying a representation {i.e. navigational view} of the instrument onto the anatomical image; see Walker1 ¶ [0072]);
c. displaying said generated navigational view by re-projecting a result of said analysis on said at least one 2-D image (registering {i.e. analyzing said comparison} the EM sensor with the anatomical image frame {e.g. 2D/3D fluoroscopic image} and displaying a representation {i.e. navigational view} of the instrument onto the anatomical image; see Walker1 ¶ [0072]);
d. amending said displaying of said generated navigational view by repeating one or more of:
i. correlating said known 3-D location of said one or more EM markers and/or EM reference sensors with a location of said one or more EM markers and/or EM reference sensors in said 2-D image (claimed in the alternative);
ii. comparing said correlation with said detected 3-D location of said tip and said shape of said endoluminal device (claimed in the alternative);
iii. analyzing said comparison to generate a navigational view of said endoluminal device on said at least one 2-D image (claimed in the alternative);
iv. displaying said generated navigational view by re-projecting a result of said analysis on said at least one 2-D image (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update the position of the elongate instrument 18 {i.e. reprojecting result of analysis} relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]);
while said endoluminal device is being moved (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update the position of the elongate instrument 18 {i.e. reprojecting result of analysis} relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
With regards to Claim 23 (dependent upon claim 1), wherein said detecting a 3-D location of a tip and a shape of said endoluminal device is performed by one or more of EM tip sensing (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}; see Walker1 ¶ [0068 & 0070]) and fiber optic shape sensing (the detector is configured to detect light signals that pass through the optical fiber, and an associated controller is configured to determine the geometric configuration of at least a portion of the medical instrument based on a spectral analysis of the reflected portions of the light signals; see Walker1 ¶ [0008]).
With regards to Claim 24 (dependent upon claim 1), wherein said displaying comprises displaying on a 3-D roadmap said endoluminal device according to said calculation (pre-operative 3-D anatomical model reference frame AMF for the model depicted on the visual display 35; see Walker1 ¶ [0054]).
With regards to Claim 25 (dependent upon claim 24), further comprising amending said displaying of said endoluminal device on said 3-D roadmap by repeating steps "b" and "c" while said endoluminal device is being moved (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated, i.e. the updating step amounts to repeating steps b & c; see Walker1 ¶ [0077]).
With regards to Claim 26 (dependent upon claim 11), further comprising utilizing said EM reference sensors to track a movement of a patient; and said method further comprises compensating for said movements performed by said patient by moving said detected 3-D location of said tip and said shape of said endoluminal device accordingly (the algorithms used to calculate the articulation and roll {i.e. 3D location} of an instrument in assisted driving may need to take into consideration the pulsatile flow in the arteries as well as heart and breath motions. In some embodiments, biological motions of the patient may be predicted and used to improve the performance of assisted driving. For example, in the images, the target may appear to move in sync with the patient's motion {i.e. compensation of 3D location}; see Walker1 ¶ [0133]).
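As an illustration of the motion-compensation concept mapped above (a minimal sketch under assumed names; Walker1's actual algorithm is not reproduced in the record), a patient-fixed EM reference sensor can be used to subtract patient motion from the sensed tip position:

```python
# Illustrative sketch only: compensating a sensed tip position for patient
# motion using an EM reference sensor fixed to the patient. All names are
# hypothetical assumptions, not from Walker1.
import numpy as np

def compensate_patient_motion(tip_now: np.ndarray,
                              ref_now: np.ndarray,
                              ref_baseline: np.ndarray) -> np.ndarray:
    """Subtract the displacement of a patient-fixed reference sensor
    (e.g. breathing or cardiac motion) from the sensed tip position."""
    return tip_now - (ref_now - ref_baseline)

# Example: the reference sensor drifted 3 mm in y (breathing), so the
# tip reading is shifted back by the same amount.
tip = compensate_patient_motion(np.array([10.0, 22.0, 300.0]),
                                np.array([0.0, 3.0, 0.0]),
                                np.array([0.0, 0.0, 0.0]))
```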
With regards to Claim 40, Walker1 discloses a method of displaying a navigational view for a field of view for an endoluminal device, comprising:
a. generating a volume from a plurality of images (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]);
b. detecting tip and shape of said endoluminal device (the time history of sensor locations provides insight as to the shape of the anatomy or the possible shape of the instrument; see Walker1 ¶ [0077]);
c. deforming said volume based on said detected tip and shape of said endoluminal device (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]); and
d. displaying a navigational view using said deformed volume and said detected tip and shape of said endoluminal device (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]).
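For illustration of the volume-deformation concept in steps "c" and "d" above (a hypothetical sketch, not Walker1's disclosed algorithm; all names and the Gaussian falloff are assumptions), a vascular centerline can be pulled toward the sensed device shape while distant anatomy stays put:

```python
# Illustrative sketch only: deforming a vascular centerline toward a sensed
# device shape, with displacement falling off with distance from the device.
import numpy as np

def deform_centerline(centerline: np.ndarray, device_pts: np.ndarray,
                      sigma_mm: float = 15.0) -> np.ndarray:
    """Pull each centerline point toward the nearest sensed device point,
    weighted by a Gaussian falloff so anatomy far from the device stays put."""
    deformed = centerline.copy()
    for i, p in enumerate(centerline):
        dists = np.linalg.norm(device_pts - p, axis=1)
        j = int(np.argmin(dists))
        w = np.exp(-(dists[j] ** 2) / (2.0 * sigma_mm ** 2))
        deformed[i] = p + w * (device_pts[j] - p)
    return deformed

# Example: a straight centerline pulled toward a device bowed 5 mm in x.
cl = np.array([[0.0, 0.0, float(z)] for z in range(0, 100, 10)])
dev = np.array([[5.0, 0.0, float(z)] for z in range(0, 100, 10)])
deformed = deform_centerline(cl, dev)
```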
With regards to Claim 41 (dependent upon claim 40), wherein said generating a volume from a plurality of images comprises one or more of:
a. receiving said plurality of images (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]);
b. analyzing said plurality of images to detect one or more vessels within said plurality of images (marking the vessels in the 3D volume via segmentation & marking algorithm; see Walker1 ¶ [0101] & incorporated US PAT 9,256,940 Abstract & FIG. 5);
c. combining multiple phases of said detected one or more vessels into a single data structure comprising vessels of interest in said field of view (imaging while the instrument is traversing blood vessels; see Walker1 ¶ [0077]; compensating for heart and/or breath motions via a converging stabilization algorithm {e.g. gating or motion smoothing/averaging}; see Walker1 ¶ [0133]);
d. combining results from "b" and "c" into a common 3-D space, thereby generating said volume (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update {i.e. amending} the position of the elongate instrument 18 relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
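Item "c" above, combining multiple phases of the detected vessels into a single data structure, can be illustrated with a simple voxel-wise majority vote across per-phase detections (a hypothetical sketch; gating or motion-averaging per Walker1 ¶ [0133] could take many forms):

```python
# Illustrative sketch only: combining per-phase binary vessel masks
# (e.g. cardiac/respiratory phases) by voxel-wise majority vote.
import numpy as np

def combine_phases(vessel_masks: list) -> np.ndarray:
    """Keep voxels detected as vessel in at least half of the phases."""
    stack = np.stack(vessel_masks).astype(float)   # shape: (phases, z, y, x)
    return stack.mean(axis=0) >= 0.5

# Example: three noisy 4x4x4 per-phase detections combined into one mask.
rng = np.random.default_rng(0)
masks = [rng.random((4, 4, 4)) > 0.5 for _ in range(3)]
combined = combine_phases(masks)
```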
With regards to Claim 42 (dependent upon claim 41), wherein said combining results from "b" and "c" into a common 3-D space, further comprises combining vascular segments with their associated 3-D spatial extends (determining the centerline {i.e. 3D spatial extends} from the 3D volume {i.e. 3D anatomical model}; see Walker1 ¶ [0122]).
With regards to Claim 43 (dependent upon claim 40), wherein said plurality of images are one or more of angiogram images, X-ray images, Cone-beam images, CT images, MRI images (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]).
With regards to Claim 44 (dependent upon claim 40), wherein said volume comprises one or more data comprising descriptions of paths along which vascular centerlines extend, descriptions of nodes at which paths join and/or bifurcate, and descriptions of vascular cross-sections along the paths (dotted lines or other indicators are provided to show the ideal instrument path or the boundary {i.e. description of cross-section} of suitable paths for the instrument as illustrated in FIG. 13; see Walker1 ¶ [0119]).
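As an illustration of how such a volume description could be organized in software (a hypothetical sketch; the field names below are assumptions, not claim language or Walker1 disclosure):

```python
# Illustrative sketch only: a data structure holding vascular paths,
# bifurcation nodes, and cross-sections along each path.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CrossSection:
    arc_length_mm: float      # position along the path
    radius_mm: float          # local vessel radius at that position

@dataclass
class VesselPath:
    centerline: List[Tuple[float, float, float]]   # sampled 3-D points
    cross_sections: List[CrossSection] = field(default_factory=list)

@dataclass
class Node:
    position: Tuple[float, float, float]   # where paths join or bifurcate
    path_ids: List[int] = field(default_factory=list)

@dataclass
class VascularVolume:
    paths: List[VesselPath] = field(default_factory=list)
    nodes: List[Node] = field(default_factory=list)
```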
With regards to Claim 45 (dependent upon claim 40), wherein said generating a volume from a plurality of images further comprises associating said generated volume with a deformation model (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]).
With regards to Claim 46 (dependent upon claim 41), wherein said receiving said plurality of images is performed in real-time (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update {i.e. amending} the position of the elongate instrument 18 relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
With regards to Claim 47 (dependent upon claim 40), wherein said detecting tip and shape of said endoluminal device comprises one or more of:
a. associating between said endoluminal device and sensor raw data received from one or more sensors in said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]);
b. reconstructing a 3-D shape of said endoluminal device based on said association (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]); and
c. detecting a shape location of said endoluminal device based on a coordinate system (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]).
With regards to Claim 48 (dependent upon claim 47), wherein said one or more sensors comprise one or more of an inductive EM sensor (EM sensing coil (i.e., an EM sensor) in a fluctuating magnetic field. The fluctuating magnetic field induces a current in the coil based on the coil's position and orientation within the field. The coil's position and orientation can thus be determined by measuring the current in the coil; see Walker1 ¶ [0008 & 0046]) (claimed in the alternative).
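The cited inductive-sensing principle is standard electromagnetics; as a brief sketch (textbook physics, not a quotation from Walker1), the EMF induced in the sensing coil follows Faraday's law:

$$\varepsilon = -\frac{d\Phi}{dt}, \qquad \Phi \approx B(t)\, A \cos\theta,$$

so the induced current depends on the coil's position (through the local field magnitude $B$) and on its orientation (through the angle $\theta$ between the coil axis and the field), which is what permits position and orientation to be recovered from the measured current.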
With regards to Claim 49 (dependent upon claim 47), wherein said deforming said volume based on said detected tip and shape of said endoluminal device comprises one or more of:
a. calculating said deforming based on constraints imposed by said reconstructing a 3-D shape of said endoluminal device based on said association and said detecting a shape location of said endoluminal device based on a coordinate system (correcting for deformation of pre-op 3D image {i.e. anatomical model} as overlaid based on the real-time synchronization of a live fluoro image with the sensed instrument position; see Walker1 ¶ [0086]); and
b. calculating said deforming based on received images taken in real-time (wherein the 3D model is deformed in concert with the refreshed live fluoro image; see Walker1 ¶ [0077, 0083-0084]).
With regards to Claim 50 (dependent upon claim 40), wherein said displaying is performed on a 2-D X-ray image (live fluoro image; see Walker1 ¶ [0083]).
With regards to Claim 51, Walker1 discloses a system (robotically-assisted instrument driving system 10; see Walker1 ¶ [0048]) configured for displaying a navigational view for an endoluminal device (catheter 18), comprising:
a. an endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]);
b. a sensor for detecting a 3-D location of a tip and a shape of said endoluminal device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]);
c. a user interface (The controller 34 further includes one or more interfaces 54 (e.g., communication databuses or network interfaces) for receiving user inputs from the user input device 33; see Walker1 ¶ [0053]); and
d. a processor unit (processor 50) configured to:
i. receive at least one 2-D image comprising a lumen to be navigated by said endoluminal device (tracking and rendering a virtual instrument on fluoroscopic images; see Walker1 ¶ [0073]);
ii. calculate a position and/or a shape of said endoluminal device within said at least one 2-D image, based on the detected 3-D location and shape of said device (registering the 3D EM sensor coordinate frame {via EM sensor data} and the fluoroscopy coordinate frame {via known markers within FoV}, wherein shape is inferred from a plurality of EM sensors or fiber optic shape sensors; see Walker1 ¶ [0068, 0070, 0076]; the time history of sensor locations provides insight as to the shape of the anatomy or the possible shape of the instrument; see Walker1 ¶ [0077]);
iii. display on said 2-D image said endoluminal device according to said calculation; and
iv. amend said displaying of said endoluminal device on said 2-D image by repeating steps "ii" and "iii" while said endoluminal device is being moved (as the instrument 18 moves through the patient, the tracking information of the sensor can be used to update {i.e. amending} the position of the elongate instrument 18 relative to the anatomy, image, or model such that the representation of the elongate instrument can be displayed moving in real-time in an anatomical image or model; see Walker1 ¶ [0055]).
With regards to Claim 52 (dependent upon claim 51), wherein the processor unit is configured to:
a. generate a volume from a plurality of images (3-D model of the anatomy is generated from pre-operative imaging {i.e. navigational view}, such as from a pre-op CT scan, and the instrument model interacts with the anatomy model to simulate instrument shape during a procedure; see Walker1 ¶ [0077]);
b. deform said volume based on said detected tip and shape of said endoluminal device (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]); and
c. display navigational view using said deformed volume and said detected tip and shape of said endoluminal device (by tracking the path of the instrument through the anatomy over time, the relative shape of the deformed vessels may be determined and both the instrument model and the anatomical model may be updated; see Walker1 ¶ [0077]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2, 10, & 21 are rejected under 35 U.S.C. 103 as being unpatentable over Walker1 in further view of Walker et al. (US PGPUB 20160228032; hereinafter "Walker2").
With regards to Claim 2 (dependent upon claim 1), while Walker1 discloses all of the limitations of claim 1 as shown above, it appears that Walker1 may be silent as to wherein said method does not require retaking new images in order to perform said "e". However, Walker2 teaches a system and method for tracking a flexible elongate instrument within a patient (see Walker2 Abstract).
In particular, Walker2 teaches wherein said method does not require retaking new images in order to perform said "e" (predicting the movement of the catheter based on an initial localization measurement {i.e. local EM sensor measurement & remote image segmentation from fluoro image} which also includes deformation information; see Walker2 ¶ [0036-0037, 0079, 0092]).
Walker1 and Walker2 are both considered to be analogous to the claimed invention because they are in the same field of tracking a flexible elongate instrument. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Walker1 to incorporate the above teachings of Walker2 to provide at least amending that does not require retaking new images in order to perform said amending. Doing so would aid in approximating the shape and heading of the flexible device 10 (see Walker2 ¶ [0028]).
With regards to Claim 10 (dependent upon claim 9), wherein said amending does not require retaking new images in order to perform said amending (predicting the movement of the catheter based on an initial localization measurement {i.e. local EM sensor measurement & remote image segmentation from fluoro image} which also includes deformation information; see Walker2 ¶ [0036-0037, 0079, 0092]).
With regards to Claim 21 (dependent upon claim 17), wherein said amending does not require retaking new images in order to perform said amending (predicting the movement of the catheter based on an initial localization measurement {i.e. local EM sensor measurement & remote image segmentation from fluoro image} which also includes deformation information; see Walker2 ¶ [0036-0037, 0079, 0092]).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Walker1 in view of Duindam et al. (US PGPUB 20130204124; hereinafter "Duindam").
With regards to Claim 22 (dependent upon claim 1), while Walker1 discloses using fiber optic position and shape tracking sensors (see Walker1 ¶ [0046]), it appears that Walker1 may be silent as to wherein said detecting a 3-D location of a tip and a shape of said endoluminal device is performed utilizing one or more of an EM curve inductive sensor and an EM curve resistive sensor. However, Duindam teaches a flexible steerable needle and a shape sensor for measuring the shape of the needle (see Duindam Abstract). In particular, Duindam teaches a shape sensor that includes flexible piezoresistive sensor arrays or wire strain detectors (see Duindam ¶ [0050]).
Walker1 and Duindam are both considered to be analogous to the claimed invention because they are in the same field of tracking elongate interventional devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Walker1 to incorporate the above teachings of Duindam to provide at least an EM curve resistive sensor. Doing so would aid in delivering accurate shape measurement of the needle, which can enable more precise control and/or enhanced error correction to ensure that needle 110 accurately traverses a desired surgical trajectory (see Duindam ¶ [0032]).
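For context on the resistive curve sensing relied upon above (standard strain-gauge physics, not a quotation from Duindam): a piezoresistive or wire strain element changes resistance with bending strain approximately as

$$\frac{\Delta R}{R_0} = GF \cdot \varepsilon,$$

where $GF$ is the gauge factor and $\varepsilon$ the local bending strain. Sampling $\Delta R$ at points along the device yields a curvature profile from which a 3-D shape can be reconstructed by integration.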
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHISH S. JASANI whose telephone number is (571) 272-6402. The examiner can normally be reached M-F 9:00 am - 5:00 pm (CST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Keith Raymond, can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASHISH S. JASANI/Examiner, Art Unit 3798
/KEITH M RAYMOND/Supervisory Patent Examiner, Art Unit 3798