DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-10, 14-26 and 30 are currently pending.
Claims 4 and 19-20 are withdrawn from prosecution.
Claims 1-3, 5-10, 14-18, 21-26, and 30 are rejected.
Response to Arguments
Applicant’s request to hold the double patenting rejections with respect to U.S. Patent App. Nos. 17/965,657 (US 2023/0121370), 17/971,873 (US 2023/0126813), 17/747,903 (US 2022/0369934), 17/952,645 (US 2023/0097431), and 17/863,211 (US 2024/0016425) in abeyance until such time as allowability of the claims is determined is acknowledged.
Applicant’s arguments in Applicant’s response filed 11/21/2025 with respect to the rejection of claims 1 and 16 under 35 U.S.C. 103 have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
That is, newly found prior art, Hirakawa, K., US 20050228221 A1, has been introduced to teach defining the reference plane and the reference frame.
Therefore, the claims stand rejected.
Claim Objections
Claims 9, 19, 25 and 30 are objected to because of the following informalities:
Claim 9 recites a plurality of curved shapes stored in the memory. Meanwhile, claim 7, from which claim 9 depends, recites “a curved shape stored in a memory”. Hence, claim 9 should be amended to clearly define whether the plurality of curved shapes in the memory of claim 9 includes the recited “a curved shape stored in a memory”.
Claim 19 (withdrawn from prosecution) and claims 25 and 30 depend directly or indirectly from claim 16, which is a method claim. Meanwhile, claims 19, 25 and 30 recite “The method according to claim 16, wherein: the system includes…”; “The method according to claim 23, wherein the system includes…”; and “The method according to claim 16, wherein the system is coupled…”, respectively. The recitations appear to mix statutory categories and hence should be amended to limit the claims to a single statutory category.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5-10, 14-18, 21-26, and 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 8 and 16 recite “the curved portion of the optical fiber”. There is insufficient antecedent basis for this limitation in the claims, since the earlier recitation of a “curved portion” is attributed to the 3D shape of the optical fiber, not to the optical fiber itself.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 6, 14-17, 22, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra et al., US 20140275997 A1, in view of Flexman et al., US 20200022764 A1, and Hirakawa, K., US 20050228221 A1.
Regarding claim 1, Chopra teaches a medical device system (interventional system 100 of fig. 1) comprising:
a medical device (interventional instrument system 200 of fig. 2) comprising an optical fiber (paragraph 37) having one or more core fibers (paragraph 39), each of the one or more core fibers including a plurality of sensors (array of Fiber Bragg Gratings (FBGs) of paragraphs 38 and 40 and fig. 2) distributed along a longitudinal length of a corresponding core fiber (paragraph 40 discloses that the FBGs are provided within each core with a specified spacing), each sensor of the plurality of sensors configured to: (i) reflect a light signal of a different spectral width based on received incident light (paragraph 40), and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber (paragraphs 40-41); and
a console (control system 112 of fig. 1 and paragraph 24) including one or more processors and a non-transitory computer-readable medium having stored thereon logic that, when executed by the one or more processors (paragraphs 30 and 78), causes operations including:
providing an incident light signal to the optical fiber (paragraph 41 discloses sending light down the fiber);
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors (paragraphs 40-41 indicate that the light reflections are of a specific band of wavelength);
processing the reflected light signals associated with the one or more core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, with paragraph 38 indicating that the shape is three-dimensional);
wherein: the curved portion of the 3D shape extends along a curved portion of the optical fiber (paragraph 42 states that “regions of the cores containing FBG's, if located at points where the fiber is bent, can thereby be used to determine the amount of bending at those points. These data, combined with the known spacings of the FBG regions, can be used to reconstruct the shape of the fiber”),
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering the image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Chopra does not teach wherein: the curved portion of the optical fiber is disposed within a vasculature of a patient, and the curved portion of the optical fiber is defined by the curved portion of the vasculature.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion of the optical fiber is disposed within a vasculature of a patient, and the curved portion of the optical fiber is defined by the curved portion of the vasculature”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra wherein: the curved portion of the optical fiber is disposed within a vasculature of a patient, and the curved portion of the optical fiber is defined by the curved portion of the vasculature, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3), with a reasonable expectation of success, as Chopra, as modified, shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Chopra in view of Flexman fails to teach defining a reference plane based on a curved portion of the 3D shape, and defining a reference frame for displaying an image of the 3D shape based on the reference plane.
However, within the same field of endeavor, Hirakawa teaches an endoscopic information processing system comprising: a shape analyzing unit that analyzes an insertional shape acquired by detecting the shape of an insertion unit of an endoscope which is inserted into a body cavity; and an information providing unit that provides information on the situation of handling the endoscope according to the result of the analysis performed by the shape analyzing unit (abstract), defining a reference plane based on a curved portion of the 3D shape, and defining a reference frame for displaying an image of the 3D shape based on the reference plane (paragraph 188 states that “the shape processing unit 213 calculates coordinates representing the positions of the source coils. Moreover, the shape processing unit 213 infers the shape of the insertion unit from the calculated coordinates representing the positions, and produces a shape-of-insertion unit image signal”. Paragraph 212 then states that “the presentation attributes include information on rotation of an insertional shape-of-endoscope image about an X-axis, and information on rotation thereof about a Y-axis. A system of coordinates defined in order to display an insertional shape image is presented through an insertional shape presentation screen 330 shown in FIG. 24(b)”, with paragraph 238 describing that the rotations of the insertional shape image about the X-axis and Y-axis are achieved by performing the known coordinate transformation on coordinates in the three-dimensional space that represent points on the insertional shape image. That is, the X- and Y-axis information (reference plane) provides the system of coordinates for displaying the insertional shape of the endoscope).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, to define a reference plane based on a curved portion of the 3D shape, and to define a reference frame for displaying an image of the 3D shape based on the reference plane, as taught by Hirakawa, to provide accurate and relevant shape information about the device during insertion (paragraphs 19-20).
Regarding claim 2, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 1 above.
Chopra further teaches wherein orienting the 3D shape includes orienting the reference plane with respect to the reference frame (paragraph 58 indicates determining a pose of a distal end of the shape sensor fiber 253 in an image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. The image reference frame comprises the reference plane).
Regarding claim 6, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 2 above.
Chopra further teaches wherein the operations further include fixing the reference plane with respect to the reference frame (paragraph 58 indicates determining a pose of a distal end of the shape sensor fiber 253 in an image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. The registration step fixes the image reference frame to the reference frame of the shape sensor 253).
Regarding claim 14, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 1 above.
Chopra further teaches wherein: the system is coupled with a patient imaging system (paragraph 32 discloses several imaging systems coupled to the system), and the operations further include: receiving image data from the patient imaging system (paragraph 32 further discloses receiving acquired dataset from the imaging systems); and rendering an image of the patient on the display along with an image of the 3D shape (paragraph 32 further discloses creating a model of the patient based on the acquired dataset).
Regarding claim 15, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 1 above.
Chopra further teaches wherein the medical device is one of an introducer wire, a guidewire, a stylet, a stylet within a needle, a needle with the optical fiber inlayed into a cannula of the needle or a catheter with the optical fiber inlayed into one or more walls of the catheter (paragraph 36 discloses a catheter).
Regarding claim 16, Chopra teaches a method for detecting placement of a medical device within a patient body (paragraph 58 discloses a method 320 for using the interventional instrument tracking system 250), the method comprising:
providing an incident light signal to an optical fiber (paragraph 39 discloses an optical fiber and paragraph 41 states “the optical fiber may be used to monitor the shape of at least a portion of the catheter system 202. More specifically, light passing through the optical fiber is processed to detect the shape of the instrument system 202 and for utilizing that information to assist in surgical procedures”) included within the medical device (see fig. 3 for the catheter body 254 through which the optical fiber shape sensor 253 is arranged), wherein the optical fiber (paragraph 37) includes one or more core fibers (paragraph 39 discloses multiple cores), each of the one or more core fibers including a plurality of reflective gratings (array of Fiber Bragg Gratings (FBGs) of paragraphs 38 and 40 and fig. 2) distributed along a longitudinal length of a corresponding core fiber (paragraph 40 discloses that the FBGs are provided within each core with a specified spacing) and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light (paragraph 40 states that “Each FBG comprises a series of modulations of the core's refractive index so as to generate a spatial periodicity in the refraction index. The spacing may be chosen so that the partial reflections from each index change add coherently for a narrow band of wavelengths, and therefore reflect only this narrow band of wavelengths while passing through a much broader band”), and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber (paragraphs 40-41 describing strain measurements, with paragraph 40 stating that “During fabrication of the FBG's, the modulations are spaced by a known distance, thereby causing reflection of a known band of wavelengths. However, when a strain is induced on the fiber core, the spacing of the modulations will change, depending on the amount of strain in the core” and paragraph 41 stating that “Thus, to measure strain, light is sent down the fiber, and characteristics of the returning light are measured. For example, FBG's produce a reflected wavelength that is a function of the strain on the fiber and its temperature.”);
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of reflective gratings (paragraphs 40-41 indicate that the light reflections are of a specific band of wavelength);
processing the reflected light signals associated with the one or more core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, with paragraph 38 indicating that the shape is three-dimensional);
wherein: the curved portion of the 3D shape extends along a curved portion of the optical fiber (paragraph 42 states that “regions of the cores containing FBG's, if located at points where the fiber is bent, can thereby be used to determine the amount of bending at those points. These data, combined with the known spacings of the FBG regions, can be used to reconstruct the shape of the fiber”),
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Chopra in view of Flexman fails to teach defining a reference plane based on a curved portion of the 3D shape, and defining a reference frame for displaying an image of the 3D shape based on the reference plane.
However, within the same field of endeavor, Hirakawa teaches an endoscopic information processing system comprising: a shape analyzing unit that analyzes an insertional shape acquired by detecting the shape of an insertion unit of an endoscope which is inserted into a body cavity; and an information providing unit that provides information on the situation of handling the endoscope according to the result of the analysis performed by the shape analyzing unit (abstract), defining a reference plane based on a curved portion of the 3D shape, and defining a reference frame for displaying an image of the 3D shape based on the reference plane (paragraph 188 states that “the shape processing unit 213 calculates coordinates representing the positions of the source coils. Moreover, the shape processing unit 213 infers the shape of the insertion unit from the calculated coordinates representing the positions, and produces a shape-of-insertion unit image signal”. Paragraph 212 then states that “the presentation attributes include information on rotation of an insertional shape-of-endoscope image about an X-axis, and information on rotation thereof about a Y-axis. A system of coordinates defined in order to display an insertional shape image is presented through an insertional shape presentation screen 330 shown in FIG. 24(b)”, with paragraph 238 describing that the rotations of the insertional shape image about the X-axis and Y-axis are achieved by performing the known coordinate transformation on coordinates in the three-dimensional space that represent points on the insertional shape image. That is, the X- and Y-axis information (reference plane) provides the system of coordinates for displaying the insertional shape of the endoscope).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, to define a reference plane based on a curved portion of the 3D shape, and to define a reference frame for displaying an image of the 3D shape based on the reference plane, as taught by Hirakawa, to provide accurate and relevant shape information about the device during insertion (paragraphs 19-20).
Regarding claim 17, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 16 above.
Chopra further teaches wherein orienting the 3D shape includes orienting the reference plane with respect to the reference frame (paragraph 58 indicates determining a pose of a distal end of the shape sensor fiber 253 in an image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. The image reference frame comprises the reference plane).
Regarding claim 22, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 17 above.
Chopra further teaches fixing the reference plane with respect to the reference frame (paragraph 58 indicates determining a pose of a distal end of the shape sensor fiber 253 in an image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. The registration step fixes the image reference frame to the reference frame of the shape sensor 253).
Regarding claim 30, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 16 above.
Chopra further teaches wherein: the system is coupled with a patient imaging system (paragraph 32 discloses several imaging systems coupled to the system), and the operations further include: receiving image data from the patient imaging system (paragraph 32 further discloses receiving acquired dataset from the imaging systems); and rendering an image of the patient on the display along with an image of the 3D shape (paragraph 32 further discloses creating a model of the patient based on the acquired dataset).
Claims 3, 7, 18, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra in view of Flexman and Hirakawa, as applied to claims 1 and 16 respectively above, and further in view of Prisco, G., US 20090324161 A1.
Regarding claim 3, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 1 above.
Chopra in view of Flexman and Hirakawa fails to teach wherein: the reference plane is defined by three or more points disposed along the curved portion of the 3D shape, and the three or more points are equidistant from the reference plane.
However, within the same field of endeavor, Prisco teaches a shape sensing system to determine the position and orientation of one link with respect to another link in a kinematic chain. An optical fiber is coupled to two or more links in a kinematic chain. A shape sensing segment is defined to start at a proximal link and to end at a distal link, crossing one or more joints. A reference frame is defined at the start of the shape sensing segment. As the joints move, an interrogator senses strain in the shape sensing segment. The sensed strain is used to output a Cartesian position and orientation of the end of the shape sensing segment with respect to the reference frame defined at the start of the shape sensing segment (abstract). FIG. 5 is a diagrammatic view that illustrates reference frames at segment starts in an optical fiber used for shape sensing (paragraph 41). In accordance with aspects of the invention, the position and orientation of each shape sensing segment end is determined with respect to the reference frame defined at the corresponding segment start. FIGS. 6A and 6B are diagrammatic views that illustrate determining segment end position and orientation (paragraph 43). In FIGS. 6A and 6B, a plane is defined along the z axis showing a course of a segment 600 of the shape sensing optical fiber 500, hence teaching wherein: the reference plane is defined by three or more points disposed along the curved portion of the 3D shape, and the three or more points are equidistant from the reference plane. Of note, the points L1, L1+S1, L2+S2, etc. lie along the length of the optical fiber and within the plane, and hence each point is equidistant from the plane.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, wherein: the reference plane is defined by three or more points disposed along the curved portion of the 3D shape, and the three or more points are equidistant from the reference plane, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 7, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 2.
Chopra in view of Flexman and Hirakawa does not teach wherein the operations further include: comparing a curved portion of the 3D shape with a curved shape stored in memory; and as a result of the comparison, identifying the three or more points from the curved portion to define the reference plane when the curved portion of the 3D shape is consistent with the curved shape stored in memory.
However, Prisco further teaches wherein the operations further include: comparing a curved portion of the 3D shape with a curved shape stored in memory (paragraph 83 states that “In all the cases described above and depicted in FIGS. 7-11, presented in accordance with aspects of the invention, the position of the kinematic chain is estimated by combining three sources of information; (i) the Cartesian information produced by the shape sensor for each of the defined segments; (ii) the a priori knowledge of the kinematic model of the kinematic chain (e.g., as stored in a readable electronic memory); and (iii) the a priori knowledge of the nature of the mechanical constraints between the kinematic chain and the shape sensing fiber at the start and end of the segment (e.g., as stored in a readable electronic memory)”. The combining of the sources of information includes a comparison of the information, as paragraph 123, by way of disclosing exemplary advantages of the invention, states that “the segment data can be merged with a priori information about the kinematic chain (i.e., with the kinematic model of the embedding structure) to estimate the most likely position of one or more links embedding the segment of fiber”); and as a result of the comparison, identifying the three or more points from the curved portion to define the reference plane when the curved portion of the 3D shape is consistent with the curved shape stored in memory (Prisco further teaches that a reference frame is defined at the start of the shape sensing segment. As the joints move, an interrogator senses strain in the shape sensing segment. The sensed strain is used to output a Cartesian position and orientation of the end of the shape sensing segment with respect to the reference frame defined at the start of the shape sensing segment (abstract)).
As indicated above, paragraphs 83 and 123 describe the combining or merging step as a comparison with known values to determine the most likely positions of the segments. FIG. 5 is a diagrammatic view that illustrates reference frames at segment starts in an optical fiber used for shape sensing (paragraph 41). In accordance with aspects of the invention, the position and orientation of each shape sensing segment end is determined with respect to the reference frame defined at the corresponding segment start. FIGS. 6A and 6B are diagrammatic views that illustrate determining segment end position and orientation (paragraph 43). In FIGS. 6A and 6B, a plane is defined along the z axis showing a course of a segment 600 of the shape sensing optical fiber 500, hence teaching wherein: the portion of the 3D shape includes three or more points disposed along the 3D shape, and the three or more points are equidistant from the reference plane. Of note, the points L1, L1+S1, L2+S2, etc. lie along the length of the optical fiber and within the plane, and hence each point is equidistant from the plane.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, wherein the operations further include: comparing a curved portion of the 3D shape with a curved shape stored in memory; and as a result of the comparison, identifying the three or more points from the curved portion to define the reference plane when the curved portion of the 3D shape is consistent with the curved shape stored in memory, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 18, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 16.
Chopra fails to teach wherein: the reference plane is defined by three or more points disposed along the curved portion of the 3D shape, and the three or more points are equidistant from the reference plane.
However, within the same field of endeavor, Prisco teaches a shape sensing system to determine the position and orientation of one link with respect to another link in a kinematic chain. An optical fiber is coupled to two or more links in a kinematic chain. A shape sensing segment is defined to start at a proximal link and to end at a distal link, crossing one or more joints. A reference frame is defined at the start of the shape sensing segment. As the joints move, an interrogator senses strain in the shape sensing segment. The sensed strain is used to output a Cartesian position and orientation of the end of the shape sensing segment with respect to the reference frame defined at the start of the shape sensing segment (abstract). FIG. 5 is a diagrammatic view that illustrates reference frames at segment starts in an optical fiber used for shape sensing (paragraph 41). In accordance with aspects of the invention, the position and orientation of each shape sensing segment end is determined with respect to the reference frame defined at the corresponding segment start. FIGS. 6A and 6B are diagrammatic views that illustrate determining segment end position and orientation (paragraph 43). In FIGS. 6A and 6B, a plane is defined along the z axis showing a course of a segment 600 of the shape sensing optical fiber 500, hence teaching wherein: the portion of the 3D shape includes three or more points disposed along the 3D shape, and the three or more points are equidistant from the reference plane. Of note, the points L1, L1+S1, L2+S2, etc., lie along the length of the optical fiber and within the plane, and hence each point is equidistant from the plane.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, wherein: the reference plane is defined by three or more points disposed along the curved portion of the 3D shape, and the three or more points are equidistant from the reference plane, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 23, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 17.
Chopra in view of Flexman and Hirakawa does not teach wherein the operations further include: searching the 3D shape to identify a portion of the 3D shape that is consistent with a curved shape stored in a memory of the system; and when a consistent portion of the 3D shape is identified, choosing three or more points along the consistent portion to define the reference plane.
However, Prisco further teaches wherein the operations further include: searching the 3D shape to identify a portion of the 3D shape that is consistent with a curved shape stored in a memory of the system (paragraph 83 states that “In all the cases described above and depicted in FIGS. 7-11, presented in accordance with aspects of the invention, the position of the kinematic chain is estimated by combining three sources of information; (i) the Cartesian information produced by the shape sensor for each of the defined segments; (ii) the a priori knowledge of the kinematic model of the kinematic chain (e.g., as stored in a readable electronic memory); and (iii) the a priori knowledge of the nature of the mechanical constraints between the kinematic chain and the shape sensing fiber at the start and end of the segment (e.g., as stored in a readable electronic memory)”. The combining of the sources of information includes a comparison of the information, as paragraph 123, by way of disclosing exemplary advantages of the invention, states that “the segment data can be merged with a priori information about the kinematic chain (i.e., with the kinematic model of the embedding structure) to estimate the most likely position of one or more links embedding the segment of fiber”); and when a consistent portion of the 3D shape is identified, choosing three or more points along the consistent portion to define the reference plane. Prisco further teaches that a reference frame is defined at the start of the shape sensing segment. As the joints move, an interrogator senses strain in the shape sensing segment. The sensed strain is used to output a Cartesian position and orientation of the end of the shape sensing segment with respect to the reference frame defined at the start of the shape sensing segment (abstract). FIG. 5 is a diagrammatic view that illustrates reference frames at segment starts in an optical fiber used for shape sensing (paragraph 41).
In accordance with aspects of the invention, the position and orientation of each shape sensing segment end is determined with respect to the reference frame defined at the corresponding segment start. FIGS. 6A and 6B are diagrammatic views that illustrate determining segment end position and orientation (paragraph 43). In FIGS. 6A and 6B, a plane is defined along the z axis showing a course of a segment 600 of the shape sensing optical fiber 500, hence teaching wherein: the portion of the 3D shape includes three or more points disposed along the 3D shape, and the three or more points are equidistant from the reference plane. Of note, the points L1, L1+S1, L2+S2, etc., lie along the length of the optical fiber and within the plane, and hence each point is equidistant from the plane.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, wherein the operations further include: searching the 3D shape to identify a portion of the 3D shape that is consistent with a curved shape stored in a memory of the system; and when a consistent portion of the 3D shape is identified, choosing three or more points along the consistent portion to define the reference plane, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Claims 5 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra in view of Flexman and Hirakawa, as applied to claim 2, and further in view of Harks, et al., US 20190247132 A1.
Regarding claim 5, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 2.
Chopra in view of Flexman and Hirakawa fails to teach wherein the operations further include orienting the reference plane with respect to the reference frame so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame.
However, within the same field of endeavor, Harks teaches a system for visualizing an image object relating to an instrument (3), particularly a medical instrument, in an extracorporeal image, the image object comprising a representation of the instrument (3) or an intracorporeal image (40) acquired using the instrument (3), the system comprising an extracorporeal image acquisition device (1) for acquiring extracorporeal images with respect to an extracorporeal image frame and an optical shape sensing tracking arrangement (2) for tracking the instrument (3) independent of the extracorporeal images with respect to a tracking frame (abstract), wherein the operations further include orienting the reference plane (the three dimensional tracking frame of paragraph 61) with respect to the reference frame (the registration of the tracking frame with the x-ray image frame in paragraph 63) so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame (see the matching of the frames in paragraphs 68-69, with paragraph 68 stating that “The representation of the projection of the graphical model 31 of the instrument 3 in the x-ray image is according to the positions and/or orientations of the distal portion of the instrument from the tracking frame that are transformed into the extracorporeal image frame”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra as modified by Flexman and Hirakawa, such that the operations further include orienting the reference plane with respect to the reference frame so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame, as taught by Harks, to reduce measurement errors (paragraph 62), with a reasonable expectation of success, as the modified Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 21, Chopra in view of Flexman and Hirakawa teaches all the limitations of claim 17.
Chopra in view of Flexman and Hirakawa fails to teach orienting the reference plane with respect to the reference frame so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame.
However, Harks teaches a system for visualizing an image object relating to an instrument (3), particularly a medical instrument, in an extracorporeal image, the image object comprising a representation of the instrument (3) or an intracorporeal image (40) acquired using the instrument (3), the system comprising an extracorporeal image acquisition device (1) for acquiring extracorporeal images with respect to an extracorporeal image frame and an optical shape sensing tracking arrangement (2) for tracking the instrument (3) independent of the extracorporeal images with respect to a tracking frame (abstract), including orienting the reference plane (the three dimensional tracking frame of paragraph 61) with respect to the reference frame (the registration of the tracking frame with the x-ray image frame in paragraph 63) so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame (see the matching of the frames in paragraphs 68-69, with paragraph 68 stating that “The representation of the projection of the graphical model 31 of the instrument 3 in the x-ray image is according to the positions and/or orientations of the distal portion of the instrument from the tracking frame that are transformed into the extracorporeal image frame”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra as modified by Flexman and Hirakawa, such that the operations further include orienting the reference plane with respect to the reference frame so that a front view image of the 3D shape according to the reference plane is aligned with a front view according to the reference frame, as taught by Harks, to reduce measurement errors (paragraph 62), with a reasonable expectation of success, as the modified Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Claims 8 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra in view of Flexman, Hirakawa and Prisco, as applied to claims 7 and 23 respectively above, and further in view of Messerly, S., US 20180289927 A1.
Regarding claim 8, Chopra in view of Flexman, Hirakawa and Prisco teaches all the limitations of claim 7.
Chopra in view of Flexman, Hirakawa and Prisco fails to teach wherein in use, the curved portion of the optical fiber is disposed along a basilic vein, a subclavian vein, an innominate vein, or a superior vena cava of the patient.
However, within the same field of endeavor, Messerly teaches a placement system for tracking, placing, and monitoring a catheter assembly or other medical device inserted into a body of a patient. The placement system utilizes optical fiber-based strain sensors to assist with catheter placement (abstract), with paragraph 53 stating that “the position, orientation, shape, and other information regarding the catheter 72, as provided by the optical fiber-based sensors 204 and as described above is communicated to the user of the system 10 to assist with placing the distal tip 76B (or other portion of the catheter) at a desired location within the patient vasculature/body, such as the lower ⅓rd of the superior vena cava. In the present embodiment, such information is depicted on the display 30, included on the console 20 as part of the system 10, though it can be configured in other ways as well, including as a separate component in one embodiment”.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa and Prisco, wherein in use, the curved portion of the optical fiber is disposed along a basilic vein, a subclavian vein, an innominate vein, or a superior vena cava of the patient, as taught by Messerly, as such modification would enable the catheter placement system to facilitate catheter placement within the patient's vasculature with a relatively high level of accuracy, i.e., placement of the distal tip of the catheter in a predetermined and desired position (paragraph 24), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 24, Chopra in view of Flexman, Hirakawa and Prisco teaches all the limitations of claim 23.
Chopra in view of Flexman, Hirakawa and Prisco fails to teach wherein in use, the curved portion of the optical fiber is disposed along a basilic vein, a subclavian vein, an innominate vein, or a superior vena cava of the patient.
However, within the same field of endeavor, Messerly teaches a placement system for tracking, placing, and monitoring a catheter assembly or other medical device inserted into a body of a patient. The placement system utilizes optical fiber-based strain sensors to assist with catheter placement (abstract), with paragraph 53 stating that “the position, orientation, shape, and other information regarding the catheter 72, as provided by the optical fiber-based sensors 204 and as described above is communicated to the user of the system 10 to assist with placing the distal tip 76B (or other portion of the catheter) at a desired location within the patient vasculature/body, such as the lower ⅓rd of the superior vena cava. In the present embodiment, such information is depicted on the display 30, included on the console 20 as part of the system 10, though it can be configured in other ways as well, including as a separate component in one embodiment”.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa and Prisco, wherein in use, the curved portion of the optical fiber is disposed along a basilic vein, a subclavian vein, an innominate vein, or a superior vena cava of the patient, as taught by Messerly, as such modification would enable the catheter placement system to facilitate catheter placement within the patient's vasculature with a relatively high level of accuracy, i.e., placement of the distal tip of the catheter in a predetermined and desired position (paragraph 24), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Claims 9 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra in view of Flexman, Hirakawa and Prisco, as applied to claims 7 and 23 respectively above, and further in view of Ramachandran, et al., US 20140206988 A1.
Regarding claim 9, Chopra in view of Flexman, Hirakawa and Prisco teaches all the limitations of claim 7.
Chopra in view of Flexman and Hirakawa fails to teach comparing the curved portion of the 3D shape with the selected curved shape.
However, Prisco further teaches comparing the curved portion of the 3D shape with the selected curved shape (paragraph 83 states that “In all the cases described above and depicted in FIGS. 7-11, presented in accordance with aspects of the invention, the position of the kinematic chain is estimated by combining three sources of information; (i) the Cartesian information produced by the shape sensor for each of the defined segments; (ii) the a priori knowledge of the kinematic model of the kinematic chain (e.g., as stored in a readable electronic memory); and (iii) the a priori knowledge of the nature of the mechanical constraints between the kinematic chain and the shape sensing fiber at the start and end of the segment (e.g., as stored in a readable electronic memory)”. The combining of the sources of information includes a comparison of the information, as paragraph 123, by way of disclosing exemplary advantages of the invention, states that “the segment data can be merged with a priori information about the kinematic chain (i.e., with the kinematic model of the embedding structure) to estimate the most likely position of one or more links embedding the segment of fiber”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, for comparing a curved portion of the 3D shape with a curved shape, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Chopra in view of Flexman, Hirakawa and Prisco fails to teach a plurality of curved shapes stored in the memory, the plurality of curved shapes pertaining to a plurality of different insertion sites for the medical device, wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device; selecting a curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site.
However, within the same field of endeavor, Ramachandran teaches a system for determining an insertion/exit position of a medical instrument and for determining how much of an instrument is internal to a patient versus external to the patient (paragraph 12). Paragraph 12 continues to state that “A fiber optic strain sensing device is mounted on or integrated in a medical instrument such that the fiber optic sensing device can show a shape as well as a spatially resolved temperature distribution for the medical instrument. In one embodiment, temperature is employed to measure the position of entry/exit of the fiber enabled device within the body. This information can be employed to calculate a length of the instrument within the body as the device is inserted and manipulated further within the body in a real-time fashion”. Ramachandran then discloses a plurality of curved shapes stored in memory (paragraph 21 states that “System 100 may include a workstation or console 112 from which a procedure is supervised and/or managed. Workstation 112 preferably includes one or more processors 114 and memory 116 for storing programs and applications. Memory 116 may store an optical sensing and interpretation module 115 configured to interpret optical feedback signals from a shape and/or temperature sensing device or system 104”), the plurality of curved shapes (paragraph 30 discloses that “The optical measurements recorded by the distributed fiber sensor 104 can be calibrated into accurate temperature values by use of the known fiber shapes and tension, in combination with the independent temperature reference 154, e.g., a thermistor reading, or from exposure of the fiber to known temperatures and temperature changes in a calibration step”.
The known fiber shapes and tension are the curved shapes) pertaining to a plurality of different insertion sites for the medical device (paragraph 21 states that “Imaging system 110 may also be employed for collecting and processing pre-operative images (e.g., image volume 130) to map out a region of interest in the subject to create an image volume for registration and with shape/temperature sensing space”, then paragraph 36 notes that “Changes in temperature along the length of the device 104 are monitored to dynamically determine the insertion point (208) or more generally determine positions within different temperature domains. As mentioned, this can be combined with pre-operative imaging to predict whether a target is being approached. It can also be used to match and register the pre-operative imaging to the shape sensing system when the point of entry is also visible in the imaging modality”. That is, the shape data is matched to pre-operative image data comprising the shape information and the associated entry point in the image), wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device (paragraph 12 states determining an insertion/exit position of the medical instrument as indicated above); selecting the curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site (paragraph 29 states that “The system 100 computes the point of entry 140 dynamically in real-time and to know the exact portion of the sensing device 104 entering the body 131. 
For a fiber-optic shape sensing system, detection of the fixed insertion point 140 can be employed to specify a patient specific reference launch region that moves in the patient's coordinate frame of reference rather than a frame of reference of the environment (e.g., the lab or operating room)”, that is, the shape information corresponds to a specified insertion point); and comparing the curved portion of the 3D shape with the selected curved shape.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa and Prisco, to include a plurality of curved shapes stored in the memory, the plurality of curved shapes pertaining to a plurality of different insertion sites for the medical device, wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device; selecting the curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site, as taught by Ramachandran, as such modification would provide reliable information regarding the insertion point of the instrument and the instrument’s locations throughout the procedure (paragraphs 2-3).
Regarding claim 25, Chopra in view of Flexman, Hirakawa and Prisco teaches all the limitations of claim 23.
Chopra in view of Flexman and Hirakawa fails to teach comparing the curved portion of the 3D shape with the selected curved shape.
However, Prisco further teaches comparing the curved portion of the 3D shape with the selected curved shape (paragraph 83 states that “In all the cases described above and depicted in FIGS. 7-11, presented in accordance with aspects of the invention, the position of the kinematic chain is estimated by combining three sources of information; (i) the Cartesian information produced by the shape sensor for each of the defined segments; (ii) the a priori knowledge of the kinematic model of the kinematic chain (e.g., as stored in a readable electronic memory); and (iii) the a priori knowledge of the nature of the mechanical constraints between the kinematic chain and the shape sensing fiber at the start and end of the segment (e.g., as stored in a readable electronic memory)”. The combining of the sources of information includes a comparison of the information, as paragraph 123, by way of disclosing exemplary advantages of the invention, states that “the segment data can be merged with a priori information about the kinematic chain (i.e., with the kinematic model of the embedding structure) to estimate the most likely position of one or more links embedding the segment of fiber”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman and Hirakawa, for comparing a curved portion of the 3D shape with a curved shape, as taught by Prisco, to provide a more effective and accurate way to determine the shape of an optical fiber and a more effective way of producing the shape information for use in determining the position and orientation of the interventional device (paragraph 10), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Chopra in view of Flexman, Hirakawa and Prisco fails to teach a plurality of curved shapes stored in the memory, the plurality of curved shapes pertaining to a plurality of different insertion sites for the medical device, wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device; selecting a curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site.
However, within the same field of endeavor, Ramachandran teaches a system for determining an insertion/exit position of a medical instrument and for determining how much of an instrument is internal to a patient versus external to the patient (paragraph 12). Paragraph 12 continues to state that “A fiber optic strain sensing device is mounted on or integrated in a medical instrument such that the fiber optic sensing device can show a shape as well as a spatially resolved temperature distribution for the medical instrument. In one embodiment, temperature is employed to measure the position of entry/exit of the fiber enabled device within the body. This information can be employed to calculate a length of the instrument within the body as the device is inserted and manipulated further within the body in a real-time fashion”. Ramachandran then discloses a plurality of curved shapes stored in memory (paragraph 21 states that “System 100 may include a workstation or console 112 from which a procedure is supervised and/or managed. Workstation 112 preferably includes one or more processors 114 and memory 116 for storing programs and applications. Memory 116 may store an optical sensing and interpretation module 115 configured to interpret optical feedback signals from a shape and/or temperature sensing device or system 104”), the plurality of curved shapes (paragraph 30 discloses that “The optical measurements recorded by the distributed fiber sensor 104 can be calibrated into accurate temperature values by use of the known fiber shapes and tension, in combination with the independent temperature reference 154, e.g., a thermistor reading, or from exposure of the fiber to known temperatures and temperature changes in a calibration step”.
The known fiber shapes and tension are the curved shapes) pertaining to a plurality of different insertion sites for the medical device (paragraph 21 states that “Imaging system 110 may also be employed for collecting and processing pre-operative images (e.g., image volume 130) to map out a region of interest in the subject to create an image volume for registration and with shape/temperature sensing space”, then paragraph 36 notes that “Changes in temperature along the length of the device 104 are monitored to dynamically determine the insertion point (208) or more generally determine positions within different temperature domains. As mentioned, this can be combined with pre-operative imaging to predict whether a target is being approached. It can also be used to match and register the pre-operative imaging to the shape sensing system when the point of entry is also visible in the imaging modality”. That is, the shape data is matched to pre-operative image data comprising the shape information and the associated entry point in the image), wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device (paragraph 12 states determining an insertion/exit position of the medical instrument as indicated above); selecting the curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site (paragraph 29 states that “The system 100 computes the point of entry 140 dynamically in real-time and to know the exact portion of the sensing device 104 entering the body 131. 
For a fiber-optic shape sensing system, detection of the fixed insertion point 140 can be employed to specify a patient specific reference launch region that moves in the patient's coordinate frame of reference rather than a frame of reference of the environment (e.g., the lab or operating room)”, that is, the shape information corresponds to a specified insertion point); and comparing the curved portion of the 3D shape with the selected curved shape.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa and Prisco, to include a plurality of curved shapes stored in the memory, the plurality of curved shapes pertaining to a plurality of different insertion sites for the medical device, wherein the operations further include: receiving input from a clinician defining an insertion site for the medical device; selecting the curved shape from the plurality of curved shapes, the selected curved shape pertaining to the defined insertion site, as taught by Ramachandran, as such modification would provide reliable information regarding the insertion point of the instrument and the instrument’s locations throughout the procedure (paragraphs 2-3).
Claims 10 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Chopra in view of Flexman, Hirakawa, Prisco, and Ramachandran, as applied to claims 9 and 25 respectively above, and further in view of Messerly, S., US 20180289927 A1.
Regarding claim 10, Chopra in view of Flexman, Hirakawa, Prisco and Ramachandran teaches all the limitations of claim 9.
Chopra in view of Flexman, Hirakawa, Prisco and Ramachandran does not teach wherein the input further defines the insertion site as located on a right side or a left side of the patient.
However, Messerly further teaches wherein the input further defines the insertion site as located on a right side or a left side of the patient (paragraph 26 states that “FIG. 2 shows the general relation of these components to a patient 70 during a procedure to place a catheter 72 into the patient vasculature through a skin insertion site 73. FIG. 2 shows that the catheter 72 generally includes a proximal portion 74 that generally remains exterior to the patient and a distal potion 76 that generally resides within the patient vasculature after placement is complete”. Fig. 2 shows the insertion site 73 on the patient’s right arm).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa, Prisco, and Ramachandran, wherein the input further defines the insertion site as located on a right side or a left side of the patient, as taught by Messerly, as such modification would enable the catheter placement system to facilitate catheter placement within the patient's vasculature with a relatively high level of accuracy, i.e., placement of the distal tip of the catheter in a predetermined and desired position (paragraph 24), with a reasonable expectation of success, as Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Regarding claim 26, Chopra in view of Flexman, Hirakawa, Prisco, and Ramachandran teaches all the limitations of claim 25.
Chopra in view of Flexman, Hirakawa, Prisco and Ramachandran does not teach wherein the input further defines the insertion site as located on a right side or a left side of the patient.
However, Messerly further teaches wherein the input further defines the insertion site as located on a right side or a left side of the patient (paragraph 26 states that “FIG. 2 shows the general relation of these components to a patient 70 during a procedure to place a catheter 72 into the patient vasculature through a skin insertion site 73. FIG. 2 shows that the catheter 72 generally includes a proximal portion 74 that generally remains exterior to the patient and a distal potion 76 that generally resides within the patient vasculature after placement is complete”. Fig. 2 shows the insertion site 73 on the patient’s right arm).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Chopra, as modified by Flexman, Hirakawa, Prisco, and Ramachandran, wherein the input further defines the insertion site as located on a right side or a left side of the patient, as taught by Messerly, as such modification would enable the catheter placement system to facilitate catheter placement within the patient's vasculature with a relatively high level of accuracy, i.e., placement of the distal tip of the catheter in a predetermined and desired position (paragraph 24), with a reasonable expectation of success, as the modified Chopra shares the same goal of providing improved navigation of interventional devices using a shape sensor with reduced errors (paragraph 3).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,038,338 B2 in view of Chopra, et al., US 20140275997 A1, and Flexman, et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the claims mentioned above of the instant application are also recited in the claims mentioned above of U.S. Patent No. 12,038,338 B2.
Instant Application
U.S. Patent No. 12,038,338 B2
1. (Original) A medical device system comprising:
a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
1. A medical device system, comprising:
an optical fiber having one or more core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber and each sensor of the plurality of sensors being configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including:
providing a broadband incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the broadband incident light signal by one or more of the plurality of sensors;
U.S. Patent No. 12,038,338 B2 does not teach processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches interventional system 100 of fig. 1 for processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, paragraph 38 indicating that the shape is three-dimensional); defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure the system of U.S. Patent No. 12,038,338 B2 to include processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
U.S. Patent No. 12,038,338 B2 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in pushability detection and torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure U.S. Patent No. 12,038,338 B2, as modified by Chopra, wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
Claims 1 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 11 of copending Application No. 17/965,657 (U.S. P.G. Pub. No. US 20230121370 A1) in view of Chopra, et al., US 20140275997 A1, and Flexman, et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the claims mentioned above of the instant application are also recited in the claims mentioned above of the copending application.
Instant Application
Copending Application 17/965,657
1. (Original) A medical device system comprising: a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including:
providing an incident light signal to the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
1. A medical system comprising: an ultrasound imaging probe having a first optical fiber integrated therein, wherein the first optical fiber includes a first set of one or more of core fibers, each of the first set of one or more core fibers including a first plurality of sensors distributed along a longitudinal length of a corresponding core fiber and each sensor of the first plurality of sensors being configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console optically coupled with the ultrasound imaging probe via a first elongate member that includes a second optical fiber having a second plurality of sensors, the console including one or more processors and a non- transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including:
providing an incident light signal to the first optical fiber via the first elongate member; receiving reflected light signals of different spectral widths of the incident light from one or more of the first plurality of sensors or one or more of the second plurality of sensors;
processing the reflected light signals associated with the first or second plurality of sensors to determine a first three-dimensional (3D) shape extending along a length including at least portions of the first optical fiber and the second optical fiber; determining a positioning of the ultrasound imaging probe based at least on the reflected light signals received from one or more of the second plurality of sensors; and causing rendering of an image on a display of the medical system in accordance with the positioning of the ultrasound imaging probe.
17/965,657 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/965,657 to define a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/965,657 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in pushability detection and torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/965,657, as modified by Chopra, wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
16. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of a system in accordance with the reference frame.
11. A method for detecting placement of a medical device within a patient, the method comprising:
providing an ultrasound imaging probe having a first optical fiber integrated therein, wherein the first optical fiber includes a first set of one or more of core fibers, each of the first set of one or more core fibers including a first plurality of sensors distributed along a longitudinal length of a corresponding core fiber and each sensor of the first plurality of sensors being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and providing a console optically coupled with the ultrasound imaging probe via a first elongate member that includes a second optical fiber having a second plurality of sensors, the console including one or more processors and a non- transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the first optical fiber via the first elongate member; receiving reflected light signals of different spectral widths of the incident light from one or more of the first plurality of sensors or one or more of the second plurality of sensors; processing the reflected light signals associated with the first or second plurality of sensors to determine a first three-dimensional (3D) shape extending along a length including at least portions of the first optical fiber and the second optical fiber; determining a position of the ultrasound imaging probe based at least on the reflected light signals received from one or more of the second plurality of sensors; and causing rendering of an image on a display of the medical system in accordance with the position of the ultrasound imaging probe.
17/965,657 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/965,657 to define a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/965,657 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in pushability detection and torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/965,657, as modified by Chopra, wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
Claims 1-2, and 15-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 17, and 19 of copending Application No. 17/971,873 (U.S. P.G. Pub. No. US 20230126813 A1) in view of Chopra, et al., US 20140275997 A1, and Flexman, et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the claims mentioned above of the instant application are also recited in the claims mentioned above of the copending application.
Instant Application
Copending Application 17/971,873
1. (Original) A medical device system comprising: a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
1. (Original) A medical device system comprising: a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; detecting a predetermined subshape of the 3D shape; and defining a reference plane in accordance with the predetermined subshape, wherein the reference plane defines a viewing perspective of the 3D shape; and orienting the reference plane in 3D space for rendering an image of the 3D shape on a display.
Copending Application No. 17/971,873 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/971,873 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/971,873 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/971,873, as modified by Chopra, wherein the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
2. (Original) The system according to claim 1, wherein orienting the 3D shape includes: defining the reference plane according to a portion of the 3D shape; fixing the 3D shape to the reference plane; and orienting the reference plane with respect to the reference frame.
Claim 1 of the copending application teaches this limitation.
15. (Original) The system according to claim 1, wherein the medical device is one of an introducer wire, a guidewire, a stylet, a stylet within a needle, a needle with the optical fiber inlayed into a cannula of the needle or a catheter with the optical fiber inlayed into one or more walls of the catheter.
17. (Original) The system according to claim 1, wherein the medical device is one of an introducer wire, a guidewire, a stylet, a stylet within a needle, a needle with the optical fiber inlayed into a cannula of the needle or a catheter with the optical fiber inlayed into one or more walls of the catheter.
16. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of a system in accordance with the reference frame.
19. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing by a system, an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; detecting a predetermined subshape of the 3D shape; and defining a reference plane in accordance with the predetermined subshape, wherein the reference plane defines a viewing perspective of the 3D shape.
Copending Application No. 17/971,873 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/971,873 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/971,873 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/971,873, as modified by Chopra, wherein the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
17. (Original) The method according to claim 16, wherein orienting the 3D shape includes: defining a reference plane according to a portion of the 3D shape; fixing the 3D shape to the reference plane; and orienting the reference plane with respect to the reference frame.
Claim 19 of the copending application teaches this limitation.
Claims 1 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 14 of copending Application No. 17/747,903 (U.S. P.G. Pub. No. US 20220369934 A1) in view of Chopra et al., US 20140275997 A1, and Flexman et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-identified claims of the instant application are also recited in the above-identified claims of the copending application.
Instant Application
Copending Application 17/747,903
1. (Original) A medical device system comprising: a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
1. (Original) A medical device system for detecting placement of a medical device within a patient body, the system comprising: the medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber and each sensor of the plurality of sensors being configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer- readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with a distal subset of the plurality of sensors of the one or more of core fibers to identify wavelength shifts in the reflected light signals resulting in detection of fluctuations of a distal portion of the optical fiber; and determining a location of the distal portion of the optical fiber in the patient body at which the distal portion is disposed based on the detected fluctuations.
Copending Application No. 17/747,903 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/747,903 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/747,903 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/747,903, as modified by Chopra, wherein the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
16. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of a system in accordance with the reference frame.
14. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to detect fluctuations of a portion of the optical fiber; and determining a location of the portion of the optical fiber in the patient body at which the portion is disposed based on the detected fluctuations.
Copending Application No. 17/747,903 does not teach processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, with paragraph 38 indicating that the shape is three-dimensional);
defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3, hence, when paragraph 58 indicates the determination of the image reference frame based on a pose of the distal end, it means the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/747,903 for processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/747,903 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate system of the interrogator to an image coordinate system. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/747,903, as modified by Chopra, wherein the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
Claims 1 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 14 of copending Application No. 17/952,645 (U.S. P.G. Pub. No. US 20230097431 A1) in view of Chopra et al., US 20140275997 A1, and Flexman et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the above-identified claims of the instant application are also recited in the above-identified claims of the copending application.
Instant Application
Copending Application 17/952,645
1. (Original) A medical device system comprising:
a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
1. A medical device system, comprising: a medical device comprising: an elongate probe; and an optical fiber having one or more of core fibers extending along the elongate probe, each of the one or more core fibers including a plurality of sensors distributed along the longitudinal length and each sensor of the plurality of sensors being configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer- readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: determining a live three-dimensional (3D) shape of the elongate probe during insertion of the elongate probe within a patient body, wherein determining includes: providing an incident light signal to the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; and
processing the reflected light signals associated with the one or more of core fibers to determine the live 3D shape; capturing a reference shape, the reference shape including at least a portion of the live 3D shape; and defining a pathway for the live 3D shape, the pathway extending distally away from a distal end of the reference shape.
17/952645 does not teach processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, and paragraph 38 indicates that the shape is three-dimensional);
defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3; hence, when paragraph 58 indicates that the image reference frame is determined based on a pose of the distal end, the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/952645 for processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/952645 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/952645 as modified by Chopra wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
16. (Original) A method for detecting placement of a medical device within a patient body, the method comprising: providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of a system in accordance with the reference frame.
14. A method for detecting placement of a medical device within a patient body, the method comprising: providing the medical device coupled with a medical device system, the medical device including an elongate probe configured for insertion within the patient body: determining a live three-dimensional (3D) shape of the elongate probe inserted within the patient body, wherein determining includes: providing an incident light signal to an optical fiber extending along the elongate probe, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to (i) reflect a light signal of a different spectral width based on received incident light, and (ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors; and processing the reflected light signals associated with the one or more of core fibers to determine the three-dimensional shape of the elongate probe inserted within the patient body; capturing a reference shape, the reference shape including at least a portion of the live 3D shape; and defining a pathway for the live 3D shape, the pathway extending distally away from a distal end of the reference shape.
17/952645 does not teach processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber (paragraph 42 indicates that the data from the FBGs are used to reconstruct the shape of the fiber, and paragraph 38 indicates that the shape is three-dimensional);
defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3; hence, when paragraph 58 indicates that the image reference frame is determined based on a pose of the distal end, the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/952645 for processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber; defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/952645 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/952645 as modified by Chopra wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
Claims 1 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 18 of copending Application No. 17/863211 (U.S. P.G. Pub. No. US 20240016425 A1) in view of Chopra, et al., US 20140275997 A1, and Flexman, et al., US 20200022764 A1. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations recited in the claims mentioned above of the instant application are also recited in the claims mentioned above of the copending application.
Instant Application
Copending Application 17/863211
1. (Original) A medical device system comprising:
a medical device comprising an optical fiber having one or more of core fibers, each of the one or more core fibers including a plurality of sensors distributed along a longitudinal length of a corresponding core fiber, each sensor of the plurality of sensors configured to:
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber; and
a console including one or more processors and a non-transitory computer-readable medium having stored thereon logic, when executed by the one or more processors, causes operations including: providing an incident light signal to the optical fiber; receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber.
1. A medical system, comprising:
an elongate medical device configured for insertion within a blood vessel of a patient, the medical device comprising: an optical fiber extending along a longitudinal length the medical device to a distal end of the medical device, the optical fiber having a single core fiber extending along the optical fiber, the single core fiber disposed radially offset from a central axis of the optical fiber, the core fiber including a plurality of sensors distributed along the longitudinal length, each sensor of the plurality of sensors configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on a state of the optical fiber; and
a console operatively coupled with the optical fiber, the console including a light source, an optical receiver, one or more processors, and a non-transitory computer-readable medium having stored thereon logic that, when executed by the one or more processors, causes operations including: projecting a light distally along the optical fiber; receiving at least one reflected light signal from the optical fiber;
processing the reflected light signal by the processor to determine a state of the optical fiber, based on the at least one reflected light signal; and communicating the state to a user.
17/863211 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3; hence, when paragraph 58 indicates that the image reference frame is determined based on a pose of the distal end, the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/863211 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/863211 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/863211 as modified by Chopra wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
16. (Original) A method for detecting placement of a medical device within a patient body, the method comprising:
providing an incident light signal to an optical fiber included within the medical device, wherein the optical fiber includes a one or more of core fibers, each of the one or more of core fibers including a plurality of reflective gratings distributed along a longitudinal length of a corresponding core fiber and each of the plurality of reflective gratings being configured to
(i) reflect a light signal of a different spectral width based on received incident light, and
(ii) change a characteristic of the reflected light signal based on strain experienced by the optical fiber;
receiving reflected light signals of different spectral widths of the incident light by one or more of the plurality of sensors;
processing the reflected light signals associated with the one or more of core fibers to determine a three-dimensional (3D) shape of the optical fiber.
18. A method performed by a medical system, comprising:
projecting a light distally along an optical fiber of the system, the optical fiber disposed within a patient body, wherein: the optical fiber includes a single core fiber disposed radially offset from a central axis of the optical fiber, the single core fiber is configured to
(i) receive the light via an optical interface of the optical fiber coupled with a console of system and (ii) propagate the light distally along a longitudinal length of the single core fiber;
receiving a light signal from the optical fiber, the light signal propagating proximally along the single core fiber;
processing the reflected light signal by the processor to determine a state of the optical fiber, based on the at least one reflected light signal; and communicating the state to a user.
17/863211 does not teach defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame.
However, Chopra teaches interventional system 100 of fig. 1 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber (paragraph 58 discloses defining an image reference frame, stating that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253. Optionally, an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed. The image may be of the distal end of the flexible body 254 superimposed on an image from the patient model”. The distal end 218 is wholly disposed within the patient as shown in figs. 3; hence, when paragraph 58 indicates that the image reference frame is determined based on a pose of the distal end, the determination is based on a distal end of the shape sensor disposed within the patient. The reference portion 251 of the shape sensor fiber 253 is disposed in a predefined reference shape according to paragraph 53);
orienting the 3D shape within the reference frame (paragraph 58 discloses that “At 328, the pose of the distal end (or any other portion) of the shape sensor fiber 253 is determined in the image reference frame based on a registration between the image reference frame and the reference frame of the shape sensor 253”); and
rendering an image of the 3D shape on a display of the system in accordance with the reference frame (paragraph 58 then states that “an image from the image reference frame that corresponds to the pose of the distal end of the flexible body 254 is displayed”; paragraph 64; Figures 5 and 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/863211 for defining a reference frame for displaying an image of the 3D shape based on a curved portion of the optical fiber; orienting the 3D shape within the reference frame; and rendering an image of the 3D shape on a display of the system in accordance with the reference frame, as taught by Chopra, to provide an improved navigation system for tracking interventional instruments in surgical environments (paragraph 3).
17/863211 in view of Chopra does not teach wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient.
However, within the same field of endeavor, Flexman teaches an optical shape sensing (OSS) guiding and monitoring system which employs an interventional device (40) including an integration of an OSS sensor (20) and one or more interventional tools (30), the OSS sensor (20) for generating shape sensing data informative of a shape of the OSS sensor (20) as the interventional device (40) is navigated within an anatomical region (see abstract), wherein, according to paragraphs 142-144, a reconstruction of a shape of the interventional device is accomplished by delineating a pose of the interventional device via the shape sensing data on a temporal frame basis within a coordinate system of an optical interrogator 71, or by registering a coordinate of the interrogator to an image coordinate. Flexman teaches “wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient”, since paragraph 83 states that “the inventions of the present provide for a detection of any folding and/or any twisting of an interventional device including an integration of an interventional tool and a OSS sensor as the interventional device is navigated within an anatomical region by a linear/curvilinear translation of the interventional device within the anatomical region and/or by an axial/non-axial rotation of the interventional device within the anatomical region”, where the anatomical region includes the heart and blood vessels according to paragraph 89, and the portion to be navigated within the anatomical region comprises the distal region according to paragraph 119. That is, the distal end, comprising a distal end node, used in the pushability detection and the torquability detection, as well as in the creation of the coordinate system for the guiding and monitoring, includes folding and twisting segments during curvilinear translation of the distal end within the lumen.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure copending Application No. 17/863211 as modified by Chopra wherein: the curved portion is disposed within a patient, and a shape of the curved portion of the optical fiber is defined by a curved portion of a vasculature of the patient, as taught by Flexman, to improve tracking of a distal end of the interventional device, including reducing any errors in pose (paragraphs 2-3).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Farouk A Bruce, whose telephone number is (408) 918-7603. The examiner can normally be reached Mon-Fri, 8am-5pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FAROUK A BRUCE/ Examiner, Art Unit 3797
/CHRISTOPHER KOHARSKI/ Supervisory Patent Examiner, Art Unit 3797