DETAILED ACTION
This Office action is responsive to original claims filed on 06/18/2025. Presently, Claims 1, 3-19 and 118-119 remain pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 118 is objected to because of the following informalities:
Claim 118, Line 3, recites “at least one anatomical markers”, which should be changed to “at least one anatomical marker”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 119 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 119, Line 2, recites “the at least one anatomical marker”, which lacks antecedent basis in either Claim 119 or Claim 1, from which Claim 119 depends. For purposes of examination, Claim 119 is interpreted as being dependent on Claim 118 rather than on Claim 1.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aljuri et al. (US 20210121251 A1; hereafter Aljuri).
With regard to Claim 1, Aljuri discloses a system for an endourological procedure (system 400, shown in Fig. 3A and 3B; also in Para 0059; “The presently disclosed methods and apparatus are well suited for combination with many types of surgery such as a prostate surgery and related methods and apparatus”), the system comprising:
a surgical system comprising at least one treatment probe configured to perform the endourological procedure (treatment probe 450) and at least one imaging probe configured to provide images of a site where the endourological procedure is performed (Aljuri, Para 0076; “The system 400 comprises a treatment probe 450 and may optionally comprise an imaging probe 460.”), the at least one treatment probe and the at least one imaging probe configured to be supported by at least one arm (Aljuri, Para 0076; “The treatment probe 450 is coupled to the base 440 with an arm 442. The imaging probe 460 is coupled to the base 440 with an arm 444.”); and
instructions stored on a computer readable medium (memory 421), which when executed by one or more processors (Aljuri, Para 0081; “The console 420 comprises a processor 423 having a memory 421. Communication circuitry 422 is coupled to processor 423 and controller 422.”), cause the one or more processors to:
provide a plurality of images obtained by the at least one imaging probe on an image display area (Aljuri, Para 0164; “The screen can provide an ultrasound image …”, as demonstrated in Fig. 10D) of a touchscreen display (display 425, which can be touch screen display, as disclosed in Para 0167; “… the user can hit the continue button with an input device such as a mouse or touch screen display” (see Fig. 10F as an example));
provide a control panel on the touchscreen display (Aljuri, Fig. 10D shows an example of the display including a control panel in the middle-bottom region, for selecting angle), the control panel comprising a plurality of features to receive a plurality of user inputs in response to a user touching the plurality of features (Aljuri, Fig. 10D-10R: user can input information in multiple steps, including selecting angle (Fig. 10D-10E) and identifying markers (10G-10H)); and
in response to receiving a user input via a feature of the plurality of features of the control panel, adjust operation of the at least one treatment probe (Aljuri, Fig. 10Q and R show the predicted treatment area (or “treatment guide”) after the multiple steps of user input, before treatment begins).
With regard to Claim 6, Aljuri discloses the system of claim 1, wherein the instructions cause the one or more processors to automatically select an image source for an image to display in the image display area, the image source and the image corresponding to a current mode and a current sub-mode of the system (Aljuri, FIG. 10L and 10Q show that a sagittal image from ultrasound is automatically selected to display in the step of “CUT” and also in the sub-steps of “CALIBRATION” and “PROFILE”).
With regard to Claim 13, Aljuri discloses the system of claim 6, wherein the instructions cause the one or more processors to automatically select one or more of an endoscope image, an ultrasound image, a transverse ultrasound image or a longitudinal image to display in the image display area (Aljuri, Fig. 10D shows that a transverse ultrasound image is automatically selected to display in the screen).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-5 and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Aljuri, in view of Mantri et al. (US 20200360100 A1; hereafter Mantri) and Duindam et al. (US 20200078103 A1; hereafter Duindam).
With regard to Claim 3, Aljuri discloses the system of claim 1, wherein the instructions cause the one or more processors to select in sequence (Aljuri, Para 0193; “FIG. 11 shows a method 1100 of treating a patient in accordance with many embodiments.” Fig. 11 shows the method with the steps implemented in sequence): 1) a first mode or sub-mode to insert an ultrasound probe into a patient (Aljuri, Para 0194; “With a step 1102, an imaging probe is provided having an imaging probe axis.” Fig. 9A shows the inserted imaging probe with axis 461); 2) an ultrasound image as an image source for an image display area in a second mode or sub-mode to insert a treatment probe comprising an endoscope into the patient (Aljuri, Para 0195; “With a step 1104, a treatment probe is provided having a treatment probe axis.” In aligning the treatment probe with the inserted imaging probe, ultrasound images are acquired and displayed, as disclosed in Para 0156, “The user can use images of the treatment probe obtained with the imaging probe to align the treatment probe with the imaging probe.”); and 3) the ultrasound image source as an image source for an image display area for each of a plurality of subsequent modes or sub-modes (Aljuri, Paras 0199-0216 list all the subsequent steps, which use displayed ultrasound images to guide treatment. See Fig. 10A-10T for graphical demonstration).
Aljuri does not clearly and explicitly disclose:
selecting an ultrasound image source as the primary image source for the primary image display area in the mode or sub-mode to insert an ultrasound probe into the patient,
selecting an endoscope image source as the primary image source for the primary image display area in the mode or sub-mode to insert a treatment probe comprising an endoscope into the patient, and
selecting an endoscope image source as the secondary image source for the secondary image display area for each of a plurality of subsequent modes or sub-modes.
Mantri in the same field of endeavor discloses selecting an ultrasound image source as the primary image source for the primary image display area in the mode or sub-mode to insert an ultrasound probe into the patient (Mantri, Para 0115; “With a step 705, an imaging probe is inserted into the patient … may comprise a TRUS probe … is inserted concurrently with or before the treatment probe. … may provide one or more images along the sagittal plane which may be generated as the imaging probe … is advanced and/or retracted.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, as suggested by Mantri, in order to acquire and display ultrasound images when inserting the ultrasound probe. One of ordinary skill in the art would have been motivated to make the modification for the benefit of ensuring that the ultrasound probe is inserted into the patient’s body safely and efficiently.
Aljuri and Mantri as discussed above do not clearly and explicitly disclose:
selecting an endoscope image source as the primary image source for the primary image display area in the mode or sub-mode to insert a treatment probe comprising an endoscope into the patient, and
selecting an endoscope image source as the secondary image source for the secondary image display area for each of a plurality of subsequent modes or sub-modes.
Duindam in the same field of endeavor discloses:
selecting an endoscope image source as the primary image source for the primary image display area in the mode or sub-mode to insert a treatment probe comprising an endoscope into the patient (Duindam, Para 0062; “… the traversal mode may be suitable for display when the elongate device is being navigated through the patient's body over substantial distances …”; Para 0066; “In the traversal mode, as depicted in FIG. 4A, multi-modal graphical user interface 400 may display a camera view window 410, a virtual endoscopic view window 420 … the camera data may include a live video feed captured by an endoscope …”. According to these disclosures, during the insertion of the treatment probe, endoscope images are displayed in a primary display area (an upper-left region in the screen of Fig. 4A). Note that both camera view 410 and virtual endoscopic view 420 are acquired from the endoscope), and
selecting an endoscope image source as the secondary image source for the secondary image display area for each of a plurality of subsequent modes or sub-modes (Duindam, Para 0062; “The alignment mode may be suitable for display during adjustment of the pose and/or small changes in insertion of the elongate device, such as when collecting a biopsy sample and/or performing laser ablation at the target location.”; Para 0073; “In the alignment mode, as depicted in FIG. 4B, multi-modal graphical user interface 400 may display a target guidance view window 460, virtual global view windows 470 and 480, camera view window 410 …”. According to these disclosures, during the subsequent steps (such as biopsy or ablation step), camera view can also be displayed, but in a secondary display area, as shown in Fig. 4B).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri and Mantri, as suggested by Duindam, in order to display endoscope images in the primary display area when inserting the treatment probe, and to display endoscope images in a secondary display area in subsequent steps such as treatment. One of ordinary skill in the art would have been motivated to make the modification for the benefit of ensuring that the treatment probe is precisely navigated to the target tissue site (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”), and enabling accurate fine adjustment (such as pose or orientation) of the distal end of the treatment probe for optimal treatment outcome (Duindam, Para 0063; “the alignment mode is used when the distal end of the elongate device is near the target location and fine adjustments to the pose of the elongate device are expected (e.g., when manually and/or automatically orienting the distal end of the elongate device at an optimal angle and/or distance from the target location for successful delivery of a needle).”).
With regard to Claim 4, Aljuri, Mantri and Duindam disclose the system of claim 3. Aljuri further discloses wherein the plurality of subsequent modes or sub-modes comprises a planning mode or sub-mode (Aljuri, Para 0204; “With a step 1158, the user interface may allow the user to select the target angle of the treatment probe when performing the cutting procedure.” In this disclosure, selecting the target angle is a planning mode) and a treatment mode or sub-mode (Aljuri, Para 0215; “With a step 1180, the treatment probe tip may perform the final cut. … The treatment probe may be paused and un-paused during the cutting process.”).
With regard to Claim 5, Aljuri, Mantri and Duindam disclose the system of claim 3. Aljuri further discloses wherein the plurality of subsequent modes or sub-modes comprises three or more of a probe alignment mode or sub-mode, an angle and depth mode or sub-mode, a registration mode or sub-mode, a profile mode or sub-mode (Aljuri, Paras 0199-0216 list multiple planning steps that match the claimed modes or sub-modes), or a treatment mode or sub-mode (Aljuri, Para 0215; “With a step 1180, the treatment probe tip may perform the final cut. … The treatment probe may be paused and un-paused during the cutting process.”).
With regard to Claim 7, Aljuri, Mantri and Duindam disclose the system of claim 3. Aljuri further discloses wherein the instructions cause the one or more processors to select the primary image source for the primary image display area in response to a current mode and a current sub-mode of the system (Aljuri, Para 0164; “FIG. 10C shows a prompt for the user to confirm that the ultrasound is in transverse view.” Para 0165; “FIG. 10D shows an angle select input screen.”. In these disclosures, a transverse ultrasound image is displayed in response to the step of angle selection).
Aljuri, Mantri and Duindam as discussed above do not clearly and explicitly disclose selecting the secondary image source corresponding to the secondary image display area in response to a current mode and a current sub-mode of the system.
Duindam further discloses selecting the secondary image source corresponding to the secondary image display area in response to a current mode and a current sub-mode of the system (Duindam, Para 0064; “… multi-modal graphical user interface 400 may transition between the traversal and alignment modes manually and/or automatically.”. For each of the two disclosed modes, a different multi-modal GUI is used, with examples shown in Fig. 4A and 4B where more than two images are displayed concurrently in each setting). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, Mantri and Duindam, as further suggested by Duindam, in order to select a secondary image source for a secondary image display area. One of ordinary skill in the art would have been motivated to make the modification for the benefit of providing more than one view perspective for the surgeon to ensure safety and efficiency in performing each surgical step (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”).
With regard to Claim 8, Aljuri, Mantri and Duindam disclose the system of claim 7 as discussed above, but do not disclose wherein the instructions cause the one or more processors to select the ultrasound image source as the primary image source in a mode corresponding to insertion of an ultrasound probe into the patient and the endoscope image source for the primary image display area in a mode corresponding to insertion of a surgical treatment probe into the patient.
Mantri further discloses selecting the ultrasound image source as the primary image source in a mode corresponding to insertion of an ultrasound probe into the patient (Mantri, Para 0115; “With a step 705, an imaging probe is inserted into the patient … may comprise a TRUS probe … is inserted concurrently with or before the treatment probe. … may provide one or more images along the sagittal plane which may be generated as the imaging probe … is advanced and/or retracted.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, Mantri and Duindam, as further suggested by Mantri, in order to acquire and display ultrasound images when inserting the ultrasound probe. One of ordinary skill in the art would have been motivated to make the modification for the benefit of ensuring that the ultrasound probe is inserted safely and efficiently.
Aljuri, Mantri and Duindam as discussed above do not explicitly and clearly disclose selecting the endoscope image source for the primary image display area in a mode corresponding to insertion of a surgical treatment probe into the patient.
Duindam further discloses selecting the endoscope image source for the primary image display area in a mode corresponding to insertion of a surgical treatment probe into the patient (Duindam, Para 0062; “… the traversal mode may be suitable for display when the elongate device is being navigated through the patient's body over substantial distances …”; Para 0066; “In the traversal mode, as depicted in FIG. 4A, multi-modal graphical user interface 400 may display a camera view window 410, a virtual endoscopic view window 420 … the camera data may include a live video feed captured by an endoscope …”. According to these disclosures, during the insertion of the treatment probe, endoscope images are displayed in a primary display area (an upper-left region in the screen of Fig. 4A). Note that both camera view 410 and virtual endoscopic view 420 are acquired from the endoscope). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, Mantri and Duindam, as further suggested by Duindam, in order to display endoscope images in the primary display area when inserting the treatment probe. One of ordinary skill in the art would have been motivated to make the modification for the benefit of ensuring that the treatment probe is precisely navigated to the target tissue site (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”).
With regard to Claim 9, Aljuri, Mantri and Duindam disclose the system of claim 8 as discussed above. Aljuri further discloses wherein the instructions cause the one or more processors to select the ultrasound probe as the primary image source for a plurality of modes subsequent to the mode corresponding to insertion of the treatment probe into the patient (Aljuri, Paras 0199-0216 list all the subsequent steps, which use displayed ultrasound images to guide treatment. See Fig. 10A-10T for graphical demonstration).
Aljuri, Mantri and Duindam as discussed above do not clearly and explicitly disclose selecting an endoscope image source as the secondary image source for a plurality of modes subsequent to the mode corresponding to insertion of the treatment probe into the patient.
Duindam further discloses selecting an endoscope image source as the secondary image source for a plurality of modes subsequent to the mode corresponding to insertion of the treatment probe into the patient (Duindam, Para 0062; “The alignment mode may be suitable for display during adjustment of the pose and/or small changes in insertion of the elongate device, such as when collecting a biopsy sample and/or performing laser ablation at the target location.”; Para 0073; “In the alignment mode, as depicted in FIG. 4B, multi-modal graphical user interface 400 may display a target guidance view window 460, virtual global view windows 470 and 480, camera view window 410 …”. According to these disclosures, during the subsequent steps (such as biopsy or ablation step), camera view can also be displayed, but in a secondary display area, as shown in Fig. 4B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, Mantri and Duindam, as further suggested by Duindam, in order to display endoscope images in a secondary display area in subsequent steps such as treatment. One of ordinary skill in the art would have been motivated to make the modification for the benefit of enabling accurate fine adjustment (such as pose or orientation) of the distal end of the treatment probe for optimal treatment outcome (Duindam, Para 0063; “the alignment mode is used when the distal end of the elongate device is near the target location and fine adjustments to the pose of the elongate device are expected (e.g., when manually and/or automatically orienting the distal end of the elongate device at an optimal angle and/or distance from the target location for successful delivery of a needle).”).
With regard to Claim 10, Aljuri, Mantri and Duindam disclose the system of claim 9 as discussed above. Aljuri further discloses wherein the plurality of modes subsequent to the mode of inserting the treatment probe comprises a planning mode and a treatment mode (Aljuri, Para 0204; “With a step 1158, the user interface may allow the user to select the target angle of the treatment probe when performing the cutting procedure.” In this disclosure, selecting the target angle is a planning mode; Para 0215; “With a step 1180, the treatment probe tip may perform the final cut. … The treatment probe may be paused and un-paused during the cutting process.”).
With regard to Claim 11, Aljuri, Mantri and Duindam disclose the system of claim 10 as discussed above. Aljuri further discloses wherein the plurality of modes subsequent to the mode of inserting the treatment probe comprises two or more of an alignment mode, an angle and depth mode, a registration mode, a profile mode or the treatment mode (Aljuri, Paras 0197-0216 list multiple steps, including probe alignment (step 1110), angle selection (step 1158), treatment (step 1172), and other relevant steps).
With regard to Claim 12, Aljuri, Mantri and Duindam disclose the system of claim 10 as discussed above. Aljuri further discloses wherein the plurality of modes subsequent to inserting the treatment probe comprises at least four modes in which the primary image source comprises the ultrasound probe (Aljuri, Fig. 10D (step of “ANGLE”), 10H (step of “SCALE”), 10M (step of “CALIBRATION”), and 10P (step of “PROFILE”) show using ultrasound images as primary image source).
Aljuri, Mantri and Duindam as discussed above do not clearly and explicitly disclose wherein at least four modes subsequent to inserting the treatment probe use an endoscope image source as the secondary image source.
Duindam further discloses wherein at least four modes subsequent to inserting the treatment probe use an endoscope image source as the secondary image source (Duindam, Para 0062; “The alignment mode may be suitable for display during adjustment of the pose and/or small changes in insertion of the elongate device, such as when collecting a biopsy sample and/or performing laser ablation at the target location.” The disclosed “alignment mode” shows a secondary image source comprising endoscope (see Fig. 4B), and based on the disclosure cited here, includes all steps subsequent to insertion of treatment probe, such as biopsy collection or laser ablation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, Mantri and Duindam, as further suggested by Duindam, in order to display endoscope images in a secondary display area in subsequent steps such as treatment. One of ordinary skill in the art would have been motivated to make the modification for the benefit of enabling accurate fine adjustment (such as pose or orientation) of the distal end of the treatment probe for optimal treatment outcome (Duindam, Para 0063; “the alignment mode is used when the distal end of the elongate device is near the target location and fine adjustments to the pose of the elongate device are expected (e.g., when manually and/or automatically orienting the distal end of the elongate device at an optimal angle and/or distance from the target location for successful delivery of a needle).”).
Claims 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Aljuri, in view of Duindam.
With regard to Claim 14, Aljuri discloses the system of claim 6, and further discloses wherein the instructions cause the one or more processors to display a primary image from a first image source (a sagittal ultrasound image from ultrasound probe) in the image display area (Aljuri, Fig. 10L shows an example of screen, where a sagittal ultrasound image from ultrasound probe is displayed in an upper-central area of the screen).
Aljuri does not clearly and explicitly disclose displaying a secondary image from a secondary image source in the image display area.
Duindam in the same field of endeavor discloses displaying a secondary image from a secondary image source in the image display area (Duindam, Para 0073; “In the alignment mode, as depicted in FIG. 4B, multi-modal graphical user interface 400 may display a target guidance view window 460, virtual global view windows 470 and 480, camera view window 410 …”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, as suggested by Duindam, in order to display endoscope images concurrently with a primary image in the image display area. One of ordinary skill in the art would have been motivated to make the modification for the benefit of providing more than one view perspective for the surgeon to ensure safety and efficiency in performing each surgical step (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”).
With regard to Claim 15, Aljuri and Duindam disclose the system of claim 14. Aljuri further discloses wherein the primary image source corresponds to the current mode and the current sub-mode of the mode (Aljuri, FIG. 10L and 10Q show that ultrasound images are selected for display in the current step of “CUT” and also in its sub-steps of “CALIBRATION” and “PROFILE”).
Aljuri and Duindam as discussed above do not clearly and explicitly disclose selecting a secondary image source for the current mode and the current sub-mode of the mode.
Duindam further discloses selecting a secondary image source for the current mode and the current sub-mode of the mode (Duindam, Para 0073; “In the alignment mode, as depicted in FIG. 4B, multi-modal graphical user interface 400 may display a target guidance view window 460, virtual global view windows 470 and 480, camera view window 410 …”; Para 0062; “The alignment mode may be suitable for display during adjustment of the pose and/or small changes in insertion of the elongate device, such as when collecting a biopsy sample and/or performing laser ablation at the target location.” The disclosed “alignment mode” shows a secondary image source comprising endoscope (see Fig. 4B), and based on the disclosure cited here, includes all steps subsequent to insertion of treatment probe, such as biopsy collection or laser ablation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri and Duindam, as further suggested by Duindam, in order to select a secondary image source such as endoscope for a current step and a current sub-step. One of ordinary skill in the art would have been motivated to make the modification for the benefit of providing more than one view perspective for the surgeon to ensure safety and efficiency in performing each surgical step (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”).
With regard to Claim 16, Aljuri and Duindam disclose the system of claim 14. Aljuri further discloses wherein the instructions cause the one or more processors
to automatically display a real time ultrasound image from an ultrasound device in a secondary image display area (Aljuri, Para 0178; “FIG. 10M shows the image of the calibration cut in real time on the screen. The probe is automatically advanced … and a display window indicates that the probe is advancing”. In the step of the calibration cut, a real-time ultrasound image is displayed) in a first mode (the step of calibration cut) and
to automatically display the real time ultrasound image in the primary image display area (Aljuri, Para 0186; “FIG. 10D shows an angle select input screen.” Fig. 10D shows the real time image in which the treatment angle is determined) in a second mode (the step of PLAN).
Aljuri and Duindam as discussed above do not clearly and explicitly disclose:
automatically displaying a real time endoscope image from an endoscope in a primary image display area in a first mode, and
automatically displaying the real time endoscope image in a secondary image display area in a second mode.
Duindam further discloses:
automatically displaying a real time endoscope image from an endoscope in a primary image display area in a first mode (Duindam, Fig. 4A shows a screen where a camera view (from a real time endoscope) is shown in a primary display area in a step of advancing the treatment probe into the patient’s body), and
automatically displaying the real time endoscope image in a secondary image display area in a second mode (Duindam, Para 0073; “In the alignment mode, as depicted in FIG. 4B, multi-modal graphical user interface 400 may display a target guidance view window 460, virtual global view windows 470 and 480, camera view window 410 …”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri and Duindam, as further suggested by Duindam, in order to automatically display a real time endoscope image in a primary image display area in a first mode and in a secondary image display area in a second mode. One of ordinary skill in the art would have been motivated to make the modification for the benefit of ensuring that the treatment probe is precisely navigated to the target tissue site and of providing more than one view perspective for the surgeon to ensure safety and efficiency in performing each surgical step (Duindam, Para 0069; “When windows 410-430 are displayed concurrently, the images displayed in windows 410-430 advantageously allow the operator to concurrently monitor and/or visualize the vicinity of the distal end of the elongate device (via camera view window 410 and/or virtual endoscope view window 420) as well as the broader pose of the elongate device (via virtual global view window 430) in relation to patient anatomy.”).
With regard to Claim 17, Aljuri and Duindam disclose the system of claim 16. Aljuri further discloses wherein in the second mode the instructions cause the one or more processors to automatically display a real time transverse ultrasound image in a first sub-mode of the second mode (Aljuri, Figs. 10D and 10E: in the sub-step of ANGLE (under the step of PLAN), a transverse ultrasound image is displayed) and to automatically display a real time longitudinal ultrasound image in a second sub-mode of the second mode (Aljuri, Figs. 10G-10K: in the sub-step of SCALE (under the same step of PLAN), a sagittal (i.e., longitudinal) ultrasound image is displayed).
With regard to Claim 18, Aljuri and Duindam disclose the system of claim 17. Aljuri further discloses wherein the instructions cause the one or more processors to toggle between the real time transverse ultrasound image and the real time longitudinal ultrasound image in the first sub-mode of the second mode and between the real time longitudinal ultrasound image and the real time transverse image in the second sub-mode of the second mode (Aljuri, Para 0335; “The user interface may comprise transverse interface 1700 and sagittal interface 1800 to plan the treatment profile in three dimensions for 3D volumetric tissue removal.” Figs. 17 and 18 show the GUI including transverse and longitudinal ultrasound images, respectively, which are toggled for planning the 3D treatment profile, for example, using the step-control buttons in the lower-left corner of the screen).
With regard to Claim 19, Aljuri and Duindam disclose the system of claim 17. Aljuri further discloses wherein the real time transverse ultrasound image and the real time longitudinal ultrasound image shown in the image display area remain above the control panel when toggled (Aljuri, Figs. 10D-10R show that, in proceeding through the different steps, a control panel remains below the displayed images. Images for two of the steps are reproduced below for demonstration, in which the red-rectangle region is the control panel).
[media_image1.png: annotated screenshot, 1038 × 620, greyscale]
[media_image2.png: annotated screenshot, 1026 × 630, greyscale]
Claims 118-119 are rejected under 35 U.S.C. 103 as being unpatentable over Aljuri, in view of a different embodiment of Aljuri.
With regard to Claim 118, Aljuri discloses the system of claim 1, wherein the instructions cause the one or more processors to: with an artificial intelligence process (Aljuri, Para 0376; “… a method 2200 of training and using an artificial intelligence or machine learning classifier …”), identify at least one anatomical marker in the site (Aljuri, Para 0376; “… a classifier is trained to recognize anatomical landmarks and a plurality of resection profiles associated with the anatomical landmarks.”).
Aljuri in this embodiment does not clearly and explicitly disclose including the at least one anatomical marker in the plurality of images.
Aljuri in a second embodiment discloses including the at least one anatomical marker in the plurality of images (Aljuri, Para 0337; “The system can present an overlay of information over the ultrasound imaging information, which may include anatomical portions of organs … In the illustrated user interface 1800, the overlay identifies areas corresponding to the median lobe zone 1810, the Bladder Neck Zone 1812, and the Mid-Prostate Zone 1814.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, as suggested by a second embodiment of its disclosure, in order to include the identified anatomical markers in the displayed images. One of ordinary skill in the art would have been motivated to make the modification for the benefit of showing important anatomical markers or zones so that a user can safely and efficiently adjust treatment profile or plan by including or excluding a specific region or zone.
With regard to Claim 119, Aljuri discloses the system of claim 118, wherein the site comprises prostate.
Aljuri in this embodiment does not clearly and explicitly disclose wherein the at least one anatomical marker comprises one or more of veru or bladder neck.
Aljuri in a second embodiment further discloses wherein the at least one anatomical marker comprises one or more of veru or bladder neck (Aljuri, Para 0378; “The anatomical landmarks may comprise one or more delicate tissue structures as described herein, such as a verumontanum …”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aljuri, as further suggested by a second embodiment of its disclosure, in order to include veru as an anatomical marker. One of ordinary skill in the art would have been motivated to make the modification for the benefit of improved safety and effectiveness by applying treatment close to the veru region (Aljuri, Para 0331; “The aggressiveness of the prostate resection treatment is related to proximity to the verumontanum. If tissue is resected closer to the veru, the effectiveness of the prostate treatment for benign prostatic hyperplasia can increase. However, the risk of male sexual dysfunction may also increase.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHANG, whose telephone number is (571) 272-7172. The examiner can normally be reached Monday-Friday, 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.Z./Examiner, Art Unit 3798
/PASCAL M BUI PHO/Supervisory Patent Examiner, Art Unit 3798