DETAILED ACTION
This office action is in response to the communication received on July 13, 2025 concerning application No. 18/416,063 filed on January 18, 2024.
Claims 1-7 and 12-20 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 07/13/2025 regarding the drawings objections have been fully considered. The amendments to the specification and drawings have been entered and overcome the objections to figs. 2 and 5-7 previously set forth. Examiner notes that applicant did not address the objection to reference character 60 in fig. 3; therefore, the objection to fig. 3 stands.
Applicant's arguments filed 07/13/2025 regarding the claim objections have been fully considered. The amendments to the claims have been entered and overcome the objections to claims 1, 4, and 6 previously set forth.
Applicant's arguments filed 07/13/2025 regarding the 35 USC 101 rejection have been fully considered. The amendments to the claims have been entered and overcome the 35 USC 101 rejection previously set forth.
Applicant's arguments filed 07/13/2025 regarding the 35 USC 102 rejection have been fully considered. The amendments to the claims have been entered and overcome the 35 USC 102 rejection previously set forth.
Applicant's arguments filed 07/13/2025 regarding the 35 USC 103 rejection have been fully considered but they are not persuasive. In response to the applicant’s arguments that the prior art fails to teach “wherein the processor is further operable to compute an estimate of the optimal location and angle of an ultrasound scanner for obtaining the 2D image slices and to mitigate the airways' effects on the ultrasound images based on anatomical information obtained from the pre-acquired 3D image data, wherein the anatomical information comprises location of the target nodules and airways in the lung, lung motion, and position of the ribs or other bony structures; and wherein the processor is further operable with a display to show the suggested location and angle, and optionally alert the physician if the scanner is off-angle or location”, examiner respectfully disagrees. Applicant specifically argues on pg. 16, “Mine, however, does not compute a suggested location and angle of the ultrasound scanner based on target nodules in the lung, nor lung motion”. However, as set forth in the previous office action, [0080]-[0081] of Mine disclose “when the scan cross-sectional plane 21 found in the search is shadowed by gas (air) or bone, the evaluating function 162 is configured to correct (adjust) the scan cross-sectional plane 21…the scan controlling function 161 calculates position information indicating the position and the orientation of the ultrasound probe 101 for scanning the scan cross-sectional plane 21”. By calculating the position and orientation of the ultrasound probe so that it avoids air, the suggested location and angle of the probe is based on the location of airways in the lung. [0079] additionally discloses the determined position is optimal for imaging the scan target.
[0058] further discloses “the scan controlling function 161 moves the ultrasound probe 101 to the initial position…the initial position is position information…as well as the orientation of the ultrasound probe 101” and [0069] “the scan controlling function 161 is configured to correct the position of the ultrasound probe 101, on the basis of at least one selected from…respiratory phase information of the patient P”. The respiratory phase information is considered lung motion information, and by basing the position on the respiratory phase, the initial position (position and orientation) is being based on lung motion. Applicant further argues, “Mine does not alert the physician if the scanner is off-angle because the Mine patent is directed ‘to obtain images of constant quality without depending on examination manipulations of the operators.’” However, Examiner notes that the alerting of the physician limitation is recited as being optional within the claims and is therefore not a required limitation.
In response to the applicant’s arguments that the prior art fails to teach “a device tip computation module programmed and operable to detect and track a surgical device based on the real-time 2D ultrasound data”, examiner respectfully disagrees. See the rejection below for how Tang is being relied upon for teaching the argued limitation recited above.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 60 in fig. 3. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a device tip computation module” in claim 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. For the purpose of examination and this Office action, the device tip computation module is being interpreted as a computing device containing software, or equivalent thereof, for detecting and tracking the location of a device tip (see [0107]-[0109] and [0114] of the present application's specification).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-2, 4-7, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadoury et al. (US 20140193053, hereinafter Kadoury) in view of Yang et al. (CN110279429A, hereinafter Yang) and Mine (US 20220087654).
Regarding claim 1, Kadoury teaches a system for registering real time two-dimensional (2D) ultrasound data of a lung of a patient to pre-acquired three-dimensional (3D) image data of the lung of the patient (system 100 in fig. 1. [0012] “an integrated framework provides an accurate and real-time updated transformation chain to connect free-hand two-dimensional (2D) ultrasound (US) images to a pre-acquired 3D reference volume from another modality”. [0030] “the present principles are applicable to internal tracking procedures of biological systems, procedures in all areas of the body such as the lungs”, meaning the image data is of the lung of a patient), the system comprising:
a storage on which the pre-acquired 3D image data of the lung of the patient is saved ([0038] “imaging system 130 may be provided for collecting pre-operative imaging data…these 3D images 131 may be stored in memory 116”. [0030] “the present principles are applicable to internal tracking procedures of biological systems, procedures in all areas of the body such as the lungs”, meaning the image data is of the lung of a patient. also see [0071]);
a processor (the electronic circuitry of system 100 in fig. 1) programmed and operable to:
(a) reconstruct 3D image volumes of the lung from ultrasonically generated 2D image slices of the lung and position-time information for each of the 2D image slices ([0076] “the 3D US volume (708) can be generated by tracking a 2D probe, streaming to the workstation all 2D image frames and corresponding tracking data…reconstructing the 3D volume based on the acquired 2D images and tracking data”, the tracking data that corresponds to the image frames is considered the position-time information for each 2D image slice. Also see [0041]); and
(b) register the pre-acquired 3D image data of the lung to the 3D image volumes of the lung ([0078] “registers the 3D US volume (708)…to the 3D pre-operative volume (702)”);
wherein the processor is further operable to compute an estimate of the optimal location and angle of an ultrasound scanner for obtaining the 2D image slices ([0055]-[0056] discloses the workflow prompts a user to position the probe at certain locations with the probe pointing in an exact direction. The location is considered the suggested location and the direction is considered the suggested angle. The user is then prompted to obtain a sweep (images) of the area of interest); and
wherein the processor is further operable with a display to show the suggested location and angle, and optionally alert the physician if the probe is off-track or off-angle ([0048] discloses the user receives the prompts through the display 118. The prompts correspond to the suggested location and angle being displayed).
Kadoury does not specifically teach the 2D image slices are generated over a plurality of breathing cycles and where the processor is further operable to categorize the 2D image slices into different sets; and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle.
However,
Yang in a similar field of endeavor teaches obtaining 2D image slices that are generated over a plurality of breathing cycles, and preferably at least four breathing cycles (pg. 3, paras. 5-7 disclose “a plurality of different first two dimensional image sequences and respiratory signals corresponding to the two-dimensional images in the first two-dimensional image sequence may be acquired”, since a plurality of image sequences over a plurality of respiratory signals (cycles) are acquired, 2D image slices are generated over a plurality of breathing cycles), wherein a processor is operable to categorize the 2D image slices into different sets (pg. 3, para. 7, “the respiratory signals corresponding to all the two-dimensional images in all the first two-dimensional image sequences are grouped according to the magnitude of the signal value, and each group actually corresponds to one breathing phase…each breathing phase corresponds to a specific breathing state”. The groups of images are considered the different sets); and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle (pg. 4, para. 1 “for each respiratory state, reconstruct a three-dimensional volume data image in the respiratory state based on the second two-dimensional image sequence corresponding to the respiratory state”, the image sequence corresponding to the respiratory state used for reconstructing the 3D volume is considered the selected first set of 2D image slices. Also, pg. 6, para. 2, “the embodiment of the present invention uses the statistical average as a reference to select a corresponding two-dimensional image in each first two-dimensional image sequence to perform three-dimensional volume data reconstruction in each respiratory state”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the processor disclosed by Kadoury to generate the 2D image slices over a plurality of breathing cycles and where the processor is further operable to categorize the 2D image slices into different sets; and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle in order to improve the success rate of reconstruction and image quality of the imaging at a reduced cost, as recognized by Yang (pg. 3, para. 3).
Kadoury in view of Yang does not specifically teach the estimate of the optimal location and angle of the ultrasound scanner mitigates the airways’ effects on the ultrasound images based on anatomical information obtained from the pre-acquired 3D image data, wherein the anatomical information comprises location of the target nodules and airways in the lung, lung motion, and position of the ribs or other bony structures.
However,
Mine in a similar field of endeavor teaches computing the optimal location and angle of the ultrasound scanner to mitigate the airways’ effects on the ultrasound images based on anatomical information obtained from the pre-acquired 3D image data, wherein the anatomical information comprises location of the target and airways in the lung ([0080]-[0081] “when the scan cross-sectional plane 21 found in the search is shadowed by gas (air) or bone, the evaluating function 162 is configured to correct (adjust) the scan cross-sectional plane 21…the scan controlling function 161 calculates position information indicating the position and the orientation of the ultrasound probe 101 for scanning the scan cross-sectional plane 21”. By calculating the position and orientation of the ultrasound probe so that it avoids air, the suggested location and angle of the probe is based on the location of airways in the lung. [0079] further discloses the determined position is optimal for imaging the scan target), lung motion ([0058] “the scan controlling function 161 moves the ultrasound probe 101 to the initial position…the initial position is position information…as well as the orientation of the ultrasound probe 101” and [0069] “the scan controlling function 161 is configured to correct the position of the ultrasound probe 101, on the basis of at least one selected from…respiratory phase information of the patient P”. The respiratory phase information is considered lung motion information, and by basing the position on the respiratory phase, the initial position (position and orientation) is being based on lung motion), and position of the ribs or other bony structures ([0080] discloses the determined scan plane position is based on the shadow formed by a rib, which corresponds to the position of the ribs).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Kadoury in view of Yang to have the estimate of the optimal location and angle of the ultrasound scanner mitigate the airways’ effects on the ultrasound images based on anatomical information obtained from the pre-acquired 3D image data, wherein the anatomical information comprises location of the target nodules and airways in the lung, lung motion, and position of the ribs or other bony structures in order to improve the quality of the ultrasound images by scanning the optimal location, as recognized by Mine ([0045]).
Regarding claim 2, Kadoury in view of Yang and Mine teaches the system of claim 1, as set forth above. Kadoury further teaches the ultrasound scanner for generating the 2D image slices of the lung (claims 18-19 disclose ultrasonic scanner probe 122 of fig. 1 is used for acquiring image planes. [0012] discloses the ultrasound images are 2D. [0030] “the present principles are applicable to internal tracking procedures of biological systems, procedures in all areas of the body such as the lungs”, meaning the image data is of the lung of a patient).
Regarding claim 4, Kadoury in view of Yang and Mine teaches the system of claim 2, as set forth above. Kadoury further teaches a tracker system ([0035] tracking system 120 in fig. 1), the tracker system comprising at least one tracking marker on the ultrasound transducer ([0036] “the probe 122 includes the sensor or sensors 123 employed by the tracking system 120”), at least one tracking marker on the body of the patient ([0025] “fiducial markers attached to the patient permit semi-automatic or fully automatic registration if a roadmap image including the fiducial markers can be obtained”, the fiducial markers are considered the tracking marker on the body), and a tracking sensor for detecting the position of each said tracking markers as a function of time ([0036] discloses the tracking system 120 (tracking sensor) shown in figs. 1-2 is used for tracking the sensors (markers) in real-time).
Regarding claim 5, Kadoury in view of Yang and Mine teaches the system of claim 4, as set forth above. Kadoury further teaches the tracking system is optical-based ([0036] “the position tracking system may include…optical or other tracking technology”).
Regarding claim 6, Kadoury in view of Yang and Mine teaches the system of claim 1, as set forth above. Kadoury further teaches the pre-acquired 3D image data is computed tomography (CT), magnetic resonance imaging (MRI), or cone-beam ([0026] the pre-operative image volumes are acquired using CT or MR).
Regarding claim 7, Kadoury in view of Yang and Mine teaches the system of claim 1, as set forth above. Kadoury further teaches registration is performed by a patch matching registration algorithm ([0077] “the 3D US volume (708) which was reconstructed from the liver organ is then processed to extract a surface (710) to be matched with the pre-acquired liver segmentation”, the extracted surface is considered the patch. The method of extracting the surface and matching is considered the algorithm performed by the system).
Regarding claim 16, Kadoury in view of Yang and Mine teaches the system of claim 1, as set forth above. Kadoury further teaches the processor is further programmed and operable to calibrate the ultrasound probe location with the pre-operative 3D image data coordinate system ([0050] “T_registration relates the coordinate system of the tracking system C_tracking to a coordinate system C_CT of the CT image. Once established, any pixel in the real-time ultrasound image 202 can be related to a voxel in the CT image 204”, the tracking system coordinates correspond to the ultrasound probe location).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadoury in view of Yang and Mine as applied to claim 2 above, and further in view of Kang et al. (US 20140163377, hereinafter Kang).
Regarding claim 3, Kadoury in view of Yang and Mine teaches the system of claim 2, as set forth above. Kadoury does not specifically teach the ultrasound scanner comprises a linear or phase-array ultrasound transducer.
However,
Kang in a similar field of endeavor teaches an ultrasound scanner comprising a phase-array ultrasound transducer ([0018] “the disclosed probe, also referred to as an XY Phased Array Registration Block”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the ultrasound scanner disclosed by Kadoury in view of Yang and Mine to be a phase-array ultrasound transducer in order to reduce the need for the user to move the probe while performing the procedure, thereby reducing user error, as recognized by Kang ([0018]).
Claim(s) 12-15 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadoury in view of Yang et al. (CN110279429A, hereinafter Yang) and Tang et al. (US 20200375664, hereinafter Tang).
Regarding claim 12, Kadoury teaches a system for registering real time two-dimensional (2D) ultrasound data of a lung of a patient to pre-acquired three-dimensional (3D) image data of the lung of the patient (system 100 in fig. 1. [0012] “an integrated framework provides an accurate and real-time updated transformation chain to connect free-hand two-dimensional (2D) ultrasound (US) images to a pre-acquired 3D reference volume from another modality”. [0030] “the present principles are applicable to internal tracking procedures of biological systems, procedures in all areas of the body such as the lungs”, meaning the image data is of the lung of a patient), the system comprising:
a storage on which the pre-acquired 3D image data of the lung of the patient is saved ([0038] “imaging system 130 may be provided for collecting pre-operative imaging data…these 3D images 131 may be stored in memory 116”. [0030] “the present principles are applicable to internal tracking procedures of biological systems, procedures in all areas of the body such as the lungs”, meaning the image data is of the lung of a patient. also see [0071]);
a processor (the electronic circuitry of system 100 in fig. 1) programmed and operable to:
(a) reconstruct 3D image volumes of the lung from ultrasonically generated 2D image slices of the lung and position-time information for each of the 2D image slices ([0076] “the 3D US volume (708) can be generated by tracking a 2D probe, streaming to the workstation all 2D image frames and corresponding tracking data…reconstructing the 3D volume based on the acquired 2D images and tracking data”, the tracking data that corresponds to the image frames is considered the position-time information for each 2D image slice. Also see [0041]); and
(b) register the pre-acquired 3D image data of the lung to the 3D image volumes of the lung ([0078] “registers the 3D US volume (708)…to the 3D pre-operative volume (702)”).
Kadoury does not specifically teach the 2D image slices are generated over a plurality of breathing cycles and where the processor is further operable to categorize the 2D image slices into different sets; and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle.
However,
Yang in a similar field of endeavor teaches obtaining 2D image slices that are generated over a plurality of breathing cycles, and preferably at least four breathing cycles (pg. 3, paras. 5-7 disclose “a plurality of different first two dimensional image sequences and respiratory signals corresponding to the two-dimensional images in the first two-dimensional image sequence may be acquired”, since a plurality of image sequences over a plurality of respiratory signals (cycles) are acquired, 2D image slices are generated over a plurality of breathing cycles), wherein a processor is operable to categorize the 2D image slices into different sets (pg. 3, para. 7, “the respiratory signals corresponding to all the two-dimensional images in all the first two-dimensional image sequences are grouped according to the magnitude of the signal value, and each group actually corresponds to one breathing phase…each breathing phase corresponds to a specific breathing state”. The groups of images are considered the different sets); and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle (pg. 4, para. 1 “for each respiratory state, reconstruct a three-dimensional volume data image in the respiratory state based on the second two-dimensional image sequence corresponding to the respiratory state”, the image sequence corresponding to the respiratory state used for reconstructing the 3D volume is considered the selected first set of 2D image slices. Also, pg. 6, para. 2, “the embodiment of the present invention uses the statistical average as a reference to select a corresponding two-dimensional image in each first two-dimensional image sequence to perform three-dimensional volume data reconstruction in each respiratory state”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the processor disclosed by Kadoury to generate the 2D image slices over a plurality of breathing cycles and where the processor is further operable to categorize the 2D image slices into different sets; and to select a first set of 2D image slices for reconstructing a first 3D image volume based on a point in the breathing cycle in order to improve the success rate of reconstruction and image quality of the imaging at a reduced cost, as recognized by Yang (pg. 3, para. 3).
Kadoury in view of Yang does not specifically teach a device tip computation module programmed and operable to detect and track a surgical device being advanced in the lung, and to display the surgical device in the 3D reconstructed volume based on the real-time 2D ultrasound data.
However,
Tang in a similar field of endeavor teaches a device tip computation module (the electronic circuitry of the computing device 180 in fig. 2) programmed and operable to detect and track a surgical device being advanced in the lung, and to display the surgical device in the 3D reconstructed volume based on the real-time 2D ultrasound data (claims 14 and 15 disclose displaying the identified portions of the percutaneous tool on the image data. The tool is considered the surgical device. [0045] discloses the tool is used in an area of the patient’s body such as the patient’s lung. [0047] discloses an EM tracking field generator is used for tracking the tool. [0078] discloses displaying the tool as it is being navigated to the target on the 3D model. The 3D model is considered the 3D reconstructed volume. [0043] further discloses the images include 2D ultrasound images and [0050] discloses the location of the treatment tool is visualized using the ultrasound imaging).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of having a device tip computation module programmed and operable to detect and track a surgical device being advanced in the lung, and to display the surgical device in the 3D reconstructed volume based on the real-time 2D ultrasound data of Tang to the system of Kadoury in view of Yang to allow for the predictable results of improving guidance instructions being shown to the user, thereby making the procedure more accurate.
Regarding claim 13, Kadoury in view of Yang and Tang teaches the system of claim 12, as set forth above. Tang further teaches the processor is further programmed and operable to display a route to a target or region of interest in the 3D reconstructed volume (claim 5 discloses determining a path from the entry point to the target location and displaying the path on the image data. [0059] discloses the 3D model is considered the image data. Also see [0074], [0078] and fig. 6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of having the processor be programmed and operable to display a route to a target or region of interest in the 3D reconstructed volume of Tang to the processor of Kadoury in view of Yang to allow for the predictable results of improving guidance instructions being shown to the user, thereby making the procedure more accurate.
Regarding claim 14, Kadoury in view of Yang and Tang teaches the system of claim 13, as set forth above. Tang further teaches the processor is further programmed and operable to compute said route in the 3D reconstructed volume (claim 5 discloses determining a path from the entry point to the target location and displaying the path on the image data. Also see [0074]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the known technique of having the processor be programmed and operable to compute said route in the 3D reconstructed volume of Tang to the processor of Kadoury in view of Yang to allow for the predictable results of streamlining the route determination, thereby making the procedure more efficient.
Regarding claim 15, Kadoury in view of Yang and Tang teaches the system of claim 12, as set forth above. Tang further teaches the surgical device is a transthoracic or transbronchial aspiration needle, or ablation probe or catheter ([0048] discloses the tool 130 is an ablation needle and is thereby considered an ablation probe. The percutaneous tool is also disclosed as being a catheter).
Regarding claim 17, Kadoury in view of Yang and Tang teaches the system of claim 12, as set forth above. Kadoury further teaches the processor is further programmed and operable to compute a suggested location, and optionally a suggested angle, for the ultrasound probe for generating the 2D image slices ([0055]-[0056] disclose the workflow prompts a user to position the probe at certain locations with the probe pointing in an exact direction. The location is considered the suggested location and the direction is considered the suggested angle. The user is then prompted to obtain a sweep (images) of the area of interest).
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kadoury in view of Yang and Tang as applied to claim 17 above, and further in view of Mine (US 20220087654).
Regarding claim 18, Kadoury in view of Yang and Tang teaches the system of claim 17, as set forth above. Kadoury in view of Yang and Tang does not specifically teach the computing the suggested location and angle is based on location of airways in the lung.
However, Mine in a similar field of endeavor teaches computing the suggested location and angle is based on location of airways in the lung ([0080]-[0081] “when the scan cross-sectional plane 21 found in the search is shadowed by gas (air) or bone, the evaluating function 162 is configured to correct (adjust) the scan cross-sectional plane 21…the scan controlling function 161 calculates position information indicating the position and the orientation of the ultrasound probe 101 for scanning the scan cross-sectional plane 21”. By calculating the position and orientation of the ultrasound probe so that it avoids air, the suggested location and angle of the probe is based on location of airways in the lung).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Kadoury in view of Yang and Tang to have the suggested location and angle be based on location of airways in the lung in order to improve the quality of the ultrasound images by scanning the optimal location, as recognized by Mine ([0080]).
Regarding claim 19, Kadoury in view of Yang, Tang and Mine teaches the system of claim 18, as set forth above. Mine further teaches the computing the suggested location and angle is further based on lung motion ([0058] “the scan controlling function 161 moves the ultrasound probe 101 to the initial position…the initial position is position information…as well as the orientation of the ultrasound probe 101” and [0069] “the scan controlling function 161 is configured to correct the position of the ultrasound probe 101, on the basis of at least one selected from…respiratory phase information of the patient P”. The respiratory phase information is considered lung motion information, and by basing the position on the respiratory phase, the initial position (position and orientation) is based on lung motion).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Kadoury in view of Yang and Tang to have the suggested location and angle be further based on lung motion in order to improve the quality of the ultrasound images, as recognized by Mine ([0045]).
Regarding claim 20, Kadoury in view of Yang, Tang and Mine teaches the system of claim 19, as set forth above. Kadoury further teaches the system is operable to display the suggested location and angle, and optionally alert the physician if the probe is off-track or off-angle ([0048] discloses the user receives the prompts through the display 118. The prompts correspond to the suggested location and angle being displayed).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW BEGEMAN whose telephone number is (571)272-4744. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW W BEGEMAN/Examiner, Art Unit 3798
/KEITH M RAYMOND/Supervisory Patent Examiner, Art Unit 3798