Prosecution Insights
Last updated: April 19, 2026
Application No. 18/798,322

Dynamic Visualization For Device Delivery

Non-Final OA: §102, §103, §112
Filed: Aug 08, 2024
Examiner: KIM, KAITLYN EUNJI
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Lightlab Imaging Inc.
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -11.7% vs TC avg)
Interview Lift: +65.7% across resolved cases with interview (a strong lift)
Avg Prosecution (typical timeline): 3y 2m; 37 applications currently pending
Career History: 49 total applications across all art units
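
The headline figures above are simple ratios over the examiner's resolved cases. As an illustration only (the dashboard's actual methodology is not shown here), the sketch below recomputes the 58% career allow rate from the 7-granted / 12-resolved counts and the percentage-point gap against a Tech Center baseline; the function names and the 70% baseline are assumptions chosen for the example, not values exposed by the tool.

# Illustrative sketch only: recomputes the headline examiner stats from the
# counts shown above. The 0.70 Tech Center baseline is an assumption chosen to
# reproduce the displayed -11.7 point gap, not a value exposed by the tool.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved if resolved else 0.0

def delta_vs_tc(examiner_rate: float, tc_avg_rate: float) -> float:
    """Percentage-point difference between the examiner and the Tech Center average."""
    return (examiner_rate - tc_avg_rate) * 100

rate = allow_rate(granted=7, resolved=12)                            # 7/12 ≈ 0.583 -> the 58% shown
print(f"Career allow rate: {rate:.1%}")                              # "58.3%"
print(f"vs TC avg: {delta_vs_tc(rate, tc_avg_rate=0.70):+.1f} pts")  # "-11.7 pts"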

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 21.4% (-18.6% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

Tech Center average is an estimate. Based on career data from 12 resolved cases.
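
Read literally, the "vs TC avg" figures above behave as percentage-point differences, so the implied Tech Center baseline for each statute can be recovered by subtraction. The short sketch below is illustrative only and assumes that reading; the dictionary simply restates the table's values.

# Illustrative only: treats each "vs TC avg" figure as a percentage-point delta
# and recovers the implied Tech Center baseline per statute (rate - delta).
# The values below simply restate the table.

statute_stats = {
    "§101": (11.9, -28.1),
    "§103": (42.2, +2.2),
    "§102": (21.4, -18.6),
    "§112": (22.5, -17.5),
}

for statute, (rate, delta) in statute_stats.items():
    implied_tc_avg = rate - delta   # e.g. §103: 42.2 - 2.2 = 40.0
    print(f"{statute}: examiner {rate:.1f}%, implied TC average ~{implied_tc_avg:.1f}%")

Under that reading, all four statutes imply a Tech Center baseline of roughly 40%, which squares with the single estimate caveat above.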

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-61 are pending in this application, and claims 1-61 have been examined on the merits.

Election/Restrictions

The species restrictions in the Office Action (11/06/2025) have been withdrawn by the examiner after reconsideration. Claims 1-61 have been examined on the merits.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the “virtual boxes” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Applicant is advised that should claim 15 be found allowable, claim 17 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Further, claim 27 is objected to because of the following informalities: in claim 27, line 1, “a AI model” should be “an AI model”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7, 8, 11-17, and 31-38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 7 recites the limitation of a “working vessel”. It is unclear if the vessel refers to a blood vessel or a portion of the guide catheter. For purposes of examination, the limitation will be construed as a part of the catheter. However, further clarification is required.

Claim 8 recites the limitation “detect… a guide wire of the intravascular device”. It is unclear whether the guide wire for detection is the same guide wire that would be used in the segmentation or if this is a guide wire for a different device. For purposes of examination, the limitation will be construed as the same guide wire, which is for an intravascular device. However, further clarification is required.

Claim 11 recites the limitation “vessel level motion”.

Claim 14 recites “determine absolute spatial information and relative spatial information”. It is unclear what spatial information is being determined. For purposes of examination, the limitation will be construed as determining the absolute spatial information and relative spatial information of the wire tip. However, further clarification is required.

Claim 31 recites the limitation “a treatment device landing zone, a balloon device zone, a vessel prep device zone, or a lesion related zone”. It is unclear what is meant by the landing zone or the device zone for each of these limitations. For purposes of examination, the limitation will be construed as a placement area for the surgical tool or instrument used. However, further clarification is required.

Claim 38 recites the limitation “update, based on the received annotations, a second one of the at least one first extraluminal image or the second extraluminal images”. It is unclear what is being updated and what the update comprises. For purposes of examination, the limitation will be construed as updating one of the extraluminal images with a location or further information based on the annotations. However, further clarification is required.

The remaining claims are rejected due to their dependency.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4, 6, 9-14, 16, 18, 31-34, and 39-44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lang (US20210137634A1).
Regarding Claim 1, Lang teaches one or more processors, the one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”) configured to: receive at least one first extraluminal image (corresponding disclosure in at least [0032], where there is at least one first extraluminal image (angiogram) “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient”); receive second extraluminal images captured during delivery of an intravascular device (corresponding disclosure in at least [0032], where there is at least a second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”); detect motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0032], where the motion feature (tracked instrument) is detected in the image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor”); correlate, based the detected motion features, the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0035], where the images are correlated (registered) “the pre-operative vascular 3D image is registered with the intra-operative angiogram using a 3D-2D registration” and further in [0032], where the motion feature (the tracked instrument) is aligned with another image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image”); and provide for output real-time visualization of a position of the intravascular device on the at least one first extraluminal image or one of the second extraluminal images including the intravascular device (corresponding disclosure in at least [0098], where there is real-time visualization of the extraluminal images (angiograms) “the computer system can maintain the display of the virtual data or virtual images superimposed onto and/or aligned with the corresponding anatomic structures, tissues and/or organs both inside the physical patient and/or in virtual data acquired, for example, (e.g. in real-time) from the physical patient, e.g. 
an intra-operative angiogram”). Regarding Claim 4, Lang further teaches wherein the intravascular device is at least one of a stent delivery device, a balloon device, an intravascular imaging probe, a vessel prep device, or a pressure wire (corresponding disclosure in at least [0039], where there is an intravascular device or a stent “, the device is an intravascular or endoluminal device or wherein the instrument is an intravascular or endoluminal instrument. In some embodiments, the device is one of a catheter, catheter tip, guidewire, sheath, stent, coil, implant, or vascular prosthesis”). Regarding Claims 6 and 10, Lang further teaches wherein the one or more processors are further configured to automatically detect, using an artificial intelligence (AI) model, a working vessel (corresponding disclosure in at least [0324], where AI is used for detection “A virtual surgical plan 67 can be utilized. Optionally, the native anatomy of the patient including after a first surgical alteration can be displayed by the OHMD 68. The OHMD can optionally display digital holograms of subsequent surgical steps… image processing techniques, pattern recognition techniques or deep learning/artificial neural-network based techniques can be used to match virtual patient data and live patient data. Optionally, image processing and/or pattern recognition algorithms can be used to identify certain features”). Regarding Claim 9, Lang teaches a system, comprising, one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), the one or more processors configured to: receive at least one first extraluminal image (corresponding disclosure in at least [0032], where there is at least one first extraluminal image (angiogram) “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient”); receive second extraluminal images captured during delivery of an intravascular device (corresponding disclosure in at least [0032], where there is at least a second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”); detect motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [1044], where the images feature the device “Catheter or device tracking systems can be set up to determine the 2D or 3D position and orientation of the catheter and/or catheter tip or other device (e.g.
a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device), preferably with the ability to update the information in real time”), wherein the motion features include at least one of a guide catheter tip, a distal endpoint of a working vessel, an optical flow at the guide catheter tip, or an optical flow at the distal endpoint of the working vessel (corresponding disclosure in at least [0039], where the motion feature is a catheter tip “the device is an intravascular or endoluminal device or wherein the instrument is an intravascular or endoluminal instrument. In some embodiments, the device is one of a catheter, catheter tip, guidewire”); correlate, based the detected motion features, the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0035], where the images are correlated (registered) “the pre-operative vascular 3D image is registered with the intra-operative angiogram using a 3D-2D registration” and further in [0032], where the motion feature (the tracked instrument) is aligned with another image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image”); and provide for output real-time visualization of a position of the intravascular device on the at least one first extraluminal image or one of the second extraluminal images including the intravascular device (corresponding disclosure in at least [0098], where there is real-time visualization of the extraluminal images (angiograms) “the computer system can maintain the display of the virtual data or virtual images superimposed onto and/or aligned with the corresponding anatomic structures, tissues and/or organs both inside the physical patient and/or in virtual data acquired, for example, (e.g. in real-time) from the physical patient, e.g. an intra-operative angiogram”). Regarding Claim 11, Lang further teaches wherein when correlating the at least one first extraluminal image and the second extraluminal images the one or more processors are further configured to: determine vessel level motion (corresponding disclosure in at least [0511], where the vessel motion is determined (vascular flow data) “different types of data, e.g. anatomic, motion, kinematic, metabolic, functional, temperature and/or vascular flow data can be used alone or in combination for registered virtual and live data of the patient”); determine wire tip level motion (corresponding disclosure in at least [1044], where the tip (catheter tip) motion is determined (tracked) “Catheter or device tracking systems can be set up to determine the 2D or 3D position and orientation of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device), preferably with the ability to update the information in real time. 
Different methods are available for catheter tracking”); and determine vessel pixel level motion (corresponding disclosure in at least [1030], where the motion (blood flow) is determined on a pixel level “the thresholding and seed growing techniques described above can also be applied to the pixels of the 2D projection images generated by the x-ray angiography, e.g. uni-planar and/or biplanar angiography and/or 3D angiography. In 2D seed growing, a 2D neighborhood around the current pixel can assessed for classification as blood vessel or flowing blood. This 2D neighborhood can include all 4-connected, 6-connected, or 8-connected pixels of the current pixel or any other number of connected pixels of the current pixel”). Regarding Claim 12, Lang further teaches wherein when determining the vessel level motion the one or more processors are further configured to determine a two-dimensional translation vector (corresponding disclosure in at least [1004], where a translational vector is determined (points and vectors of the plan) “The interface allows a dual or multiple display mode of AP and lateral views, as well as any oblique views obtained. Using a standard mouse or track ball, the interface allows the surgeon or interventionalist to define entry points and vectors of instruments and pedicle screws. 3D coordinates of points and vectors of the plan are determined using a minimum of 2 approximately perpendicular fluoroscopy views”). Regarding Claim 13, Lang further teaches wherein the two-dimensional translation vector corresponds to two-dimensional translation at an n-th frame with respect to a first frame (corresponding disclosure in at least [1035], where there is a translation process with respect to a first frame (there is a first frame based on the registered tracked information, which includes a translational component) “An optional offset can be added by including an additional translation component to vary the distance of the 3D model from the 2D images. This information can be used to create an overlay display or superimposition of the 3D model displayed by one or more OHMDs with a 2D or 3D angiogram or other vascular imaging study displayed by a separate or standalone computer monitor or display. his information can also be used to create an overlay display or superimposition of the 3D model displayed by one or more OHMDs with a 2D or 3D angiogram or other vascular imaging study co-displayed by the one or more OHMDs. Thus, a 3D model, e.g. including virtual data on a catheter, guidewire, stent, coil, implant or other intra-vascular or endoluminal device inside the vessel(s) or lumen including, optionally, tracking information for a tracked catheter, guidewire…”). Regarding Claim 14, Lang further teaches wherein when determining the wire tip level motion the one or more processors are further configured to determine absolute spatial information and relative spatial information (corresponding disclosure in at least [0388], where the spatial information is determined “an image and/or video capture system can detect the optical marker and its location, position and/or orientation can be used to determine the location, position, and/or orientation of the surgical instrument, e.g. a pin, including its tip or frontal portion inside the patient due to their defined spatial relationship and due to the known geometry of the surgical instrument”). 
Regarding Claim 16, Lang further teaches wherein when determining vessel pixel level motion the one or more processors are further configured to determine absolute spatial information and relative spatial information for a plurality of points on a working vessel (corresponding disclosure in at least [0484], where there are multiple points which are registered for the spatial information “the accuracy of registration can optionally be improved by using multiple registration points, patterns, planes or surfaces. In general, the accuracy of registration will improve with an increasing number of registration points, patterns, planes or surfaces. These may, in some embodiments, not exceed the spatial resolution of the image and/or video capture system” and further in [1045], where the spatial information is determined on multiple points (2D coordinate system, meaning there are multiple points for any of the devices, including the working vessel) “Since the spatial configuration of the imaging planes is known, the separate 2D coordinates can be transformed back into 3D space, for example using a backprojection algorithm, which can result in position and orientation, e.g. coordinates in x, y, and/or z-direction, of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) in 3D space”). Regarding Claim 18, Lang teaches a system, comprising: one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), the one or more processors configured to: receive at least one first extraluminal image (corresponding disclosure in at least [0032], where there is at least one first extraluminal image (angiogram) “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient”); receive second extraluminal images captured during delivery of an intravascular device (corresponding disclosure in at least [0032], where there is at least a second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”); detect motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0032], where the motion feature (tracked instrument) is detected in the image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient 
displayed on the computer monitor”); determine, based on the detected motion features, a heartbeat period of a patient (corresponding disclosure in at least [1048], where there is detection of the heartbeat “the 3D models of the vasculature and/or heart and/or lungs and/or neurovascular anatomy derived from preoperative or intra-operative images by one or more OHMDs, compensated for cardiac movement due to heartbeat and organ motion during respiration and patient movement. A reference sensor attached to the patient's body can also provide information about the spatial relationship between the patient and the tracking field”) and further in [1218], where heartbeat is determined based on (with respect to) the images (the images contain the motion features) “ the overlaying and/or aligning and/or superimposing can also be performed by registering the pre-procedural images with the intra-procedural images with regard to cardiac cycle/cardiac gating and/or respiratory cycle/respiratory gating”); and provide for output real-time visualization of a position of the intravascular device on the at least one first extraluminal image or one of the second extraluminal images including the intravascular device (corresponding disclosure in at least [0098], where there is real-time visualization of the extraluminal images (angiograms) “the computer system can maintain the display of the virtual data or virtual images superimposed onto and/or aligned with the corresponding anatomic structures, tissues and/or organs both inside the physical patient and/or in virtual data acquired, for example, (e.g. in real-time) from the physical patient, e.g. an intra-operative angiogram”). Regarding Claim 31, Lang teaches a system, comprising: one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), the one or more processors configured to: receive at least one first extraluminal image (corresponding disclosure in at least [0032], where there is at least one first extraluminal image (angiogram) “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient”); receive second extraluminal images captured during delivery of an intravascular device (corresponding disclosure in at least [0032], where there is at least a second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”); detect motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0032], where the motion feature (tracked instrument) is detected 
in the image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor”); correlate, based the detected motion features, the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0035], where the images are correlated (registered) “the pre-operative vascular 3D image is registered with the intra-operative angiogram using a 3D-2D registration” and further in [0032], where the motion feature (the tracked instrument) is aligned with another image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image”); provide for output real-time visualization of a position of the intravascular device on the at least one first extraluminal image or one of the second extraluminal images including the intravascular device (corresponding disclosure in at least [0098], where there is real-time visualization of the extraluminal images (angiograms) “the computer system can maintain the display of the virtual data or virtual images superimposed onto and/or aligned with the corresponding anatomic structures, tissues and/or organs both inside the physical patient and/or in virtual data acquired, for example, (e.g. in real-time) from the physical patient, e.g. an intra-operative angiogram”); and provide for output a treatment zone on at least one of the second extraluminal images or the at least one first extraluminal image (corresponding disclosure in at least [0692], were there is a treatment zone in the images (there is a virtual plan developed which shows where to place the surgical tools, or the treatment zone, using virtual data, or the images) “A virtual surgical plan using, for example, virtual data of the patient, can be used to develop or determine any of the following for placing or directing a surgical tool, a surgical instrument, a trial implant component, a trial implant, an implant component, an implant, a device including any type of biological treatment or implant or matrix known in the art”). Regarding Claim 32, Lang further teaches wherein the treatment zone is at least one of a treatment device landing zone, a balloon device zone, a vessel prep device zone, or a lesion related zone (corresponding disclosure in at least [0692], where there is a treatment device landing zone (the plan determines where to place an implant or a treatment) “A virtual surgical plan using, for example, virtual data of the patient, can be used to develop or determine any of the following for placing or directing a surgical tool, a surgical instrument, a trial implant component, a trial implant, an implant component, an implant, a device including any type of biological treatment or implant or matrix known in the art”). Regarding Claim 33, Lang further teaches the lesion related zone is at least one of calcification frames, lipid frames, or dissected frames (corresponding disclosure in at least [1122], where the lesion zone is a calcified frame (area) or lipid area “e.g. a pre-operative ultrasound, CTA or MRA, available, for example, for display by the one or more OHMDs. 
Illustrative, non-limiting examples of anatomic landmarks, features, surfaces, dimensions, shapes, and/or geometries and/or other features that can be detected and/or recognized, using one or more computer processors configured for detection of image features, include, for example… Calcified portion of a vascular plaque , Dimensions of calcified portion of a vascular plaque, Shape of calcified portion of a vascular plaque, Volume of calcified portion of a vascular plaque”), and the lesion related zone is identified from another imaging modality and co-registered, by the one or processors, to the first extraluminal image (corresponding disclosure in at least [1122], where the zone is determined from a different imaging modality “the standalone or separate computer or display monitor can display data from an intra-procedural imaging study of the patient, e.g. a CT, a 2D or 3D angiogram, a digital subtraction angiogram, a flow study, a bolus tracking study, a bolus chase study, including, for example, imaging during movement of the patient and/or surgical/interventional radiologic table and/or movement of one or more OHMDs” and further in [1121], where the other modality is registered “s, the data and/or images displayed by the OHMD and the data and/or images displayed by the standalone or separate computer or display monitor can be cross-registered and, for example, registered in a shared or common coordinate system with use of an image and/or video capture system and/or 3D scanner integrated into, attached to, or separate from the OHMD, using one or more computer processors”). Regarding Claim 34, Lang further teaches wherein the other imaging modality is an intravascular imaging modality (corresponding disclosure in at least [1122], where the other imaging modality is an intravascular imaging modality “intra-procedural imaging study of the patient, e.g. a CT, a 2D or 3D angiogram, a digital subtraction angiogram”). 
Regarding Claim 39, Lang teaches a system, comprising: one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), the one or more processors configured to: receive at least one first extraluminal image (corresponding disclosure in at least [0032], where there is at least one first extraluminal image (angiogram) “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient”); receive second extraluminal images captured during delivery of an intravascular device (corresponding disclosure in at least [0032], where there is at least a second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”); detect motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0032], where the motion feature (tracked instrument) is detected in the image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor”); correlate, based the detected motion features, the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [0035], where the images are correlated (registered) “the pre-operative vascular 3D image is registered with the intra-operative angiogram using a 3D-2D registration” and further in [0032], where the motion feature (the tracked instrument) is aligned with another image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image”); provide for output real-time visualization of a position of the intravascular device on the at least one first extraluminal image or one of the second extraluminal images including the intravascular device (corresponding disclosure in at least [0098], where there is real-time visualization of the extraluminal images (angiograms) “the computer system can maintain the display of the virtual data or virtual images superimposed onto and/or aligned with the corresponding anatomic structures, tissues and/or organs both inside the physical patient and/or in virtual data acquired, for example, (e.g. in real-time) from the physical patient, e.g. 
an intra-operative angiogram”); and automatically capture a screen capture of the real-time visualization of the position of the intravascular device (corresponding disclosure in at least [0069], where there is capturing of the position of the device (video capturing is completed of instruments inside the cavity or lumen) “Tracking of the one or more image capture system, video capture system, image or video capture system, image and/or video capture system, and/or optical imaging system can, for example, be advantageous when the one or more 3D scanners are integrated into or attached to an instrument, an endoscope, and/or when they are located internal to any structures, e.g. inside a cavity or a lumen, e.g. a vascular lumen” and further in [0515]-[0516], where there is automatic capturing (capturing of steps based on the surroundings, including capturing placement of the instrument) “The matching, superimposing and/or registering of the live data of the patient and the virtual data of the patient after the surgical tissue alteration can be manual, semi-automatic or automatic using information about the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features… The surgical alteration or surgical steps can include, but are not limited to the listed in the following… placement of a registration marker or calibration phantom on the tissue surface or inside the tissue, placement of a surgical instrument, placement of a device or a component thereof, placement of a tissue graft, placement of a tissue matrix”). Regarding Claim 40, Lang further teaches wherein the screen capture is automatically captured when the intravascular device is within a threshold distance of a region of interest (corresponding disclosure in at least [0515]-[0516], where there is automatic capturing of a region of interest (capturing of steps based on the surroundings, including capturing placement of the instrument, which is construed as the region of interest) “The matching, superimposing and/or registering of the live data of the patient and the virtual data of the patient after the surgical tissue alteration can be manual, semi-automatic or automatic using information about the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features… The surgical alteration or surgical steps can include, but are not limited to the listed in the following… placement of a registration marker or calibration phantom on the tissue surface or inside the tissue, placement of a surgical instrument, placement of a device or a component thereof, placement of a tissue graft, placement of a tissue matrix”) and further in [0398], where the information is within a threshold “Once the accuracy and/or the reproducibility and/or the precision of performing distance measurements and/or angle measurements and/or area measurements and/or volume measurements and/or coordinate measurements using one or more image and/or video capture system integrated into, attached to or separate from the OHMD has been determined, threshold values can, for example, be defined that can indicate when the system is operating outside a clinically acceptable performance range”). 
Regarding Claim 41, Lang further teaches wherein the region of interest is a treatment zone (corresponding disclosure in at least [0515]-[0516], where there is a region of interest, or treatment zone (area where surgical steps are being completed) “The matching, superimposing and/or registering of the live data of the patient and the virtual data of the patient after the surgical tissue alteration can be manual, semi-automatic or automatic using information about the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features… The surgical alteration or surgical steps can include, but are not limited to the listed in the following… placement of a registration marker or calibration phantom on the tissue surface or inside the tissue, placement of a surgical instrument, placement of a device or a component thereof, placement of a tissue graft, placement of a tissue matrix”). Regarding Claim 42, Lang further teaches wherein the treatment zone is at least one of a treatment device landing zone, a balloon device zone, a vessel prep device zone, or a lesion related zone (corresponding disclosure in at least [0515]-[0516], where the treatment zone is the treatment device landing zone (area where surgical steps are being completed) “The matching, superimposing and/or registering of the live data of the patient and the virtual data of the patient after the surgical tissue alteration can be manual, semi-automatic or automatic using information about the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features… The surgical alteration or surgical steps can include, but are not limited to the listed in the following… placement of a registration marker or calibration phantom on the tissue surface or inside the tissue, placement of a surgical instrument, placement of a device or a component thereof, placement of a tissue graft, placement of a tissue matrix”). Regarding Claim 43, Lang further teaches wherein determining the threshold distance comprises at least one of: determining a number of pixels between an outer boundary of the region of interest and at least one detected marker on the intravascular device, or determining a spatial distance between the at least one detected marker on the intravascular device and the region of interest (corresponding disclosure in at least [0522], where the threshold difference is based on the region of interest (there is a threshold distance between the plan (defined region of interest) “If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation” and further in [0966]-[0967], where there are markers on the device (instrument) and the threshold distance can be determined between the marker and the boundary (the measured difference) “the tracking data obtained outside the patient's body can be compared with the tracking data obtained inside the patient's body. Any differences in measured coordinates of the one or more pointers or pointing devices and any other instruments can be determined. If these differences exceed, for example, a threshold value, e.g. 
greater than 0.5, 1.0, 1.5, 2.0 mm or degrees in x, y and/or z-direction or angular orientation, it can trigger an alert. An alert can, for example, suggest to repeat the registration outside the patient's body, inside the patient's body or both. Any differences in coordinates of the one or more pointers or pointing devices and any other instruments including measured inside the patient's body as compared to measured outside the patient's body can optionally also be reconciled using, for example, statistical methods, e.g. using means, weighted means, medians, standard deviations etc. of measured coordinates”). Regarding Claim 44, Lang further teaches wherein the spatial distance is a Euclidean distance or a geodesic distance (corresponding disclosure in at least [0966], where the spatial distance is a Euclidian distance (the distance is between points within the x-y-z direction) “Any differences in measured coordinates of the one or more pointers or pointing devices and any other instruments can be determined. If these differences exceed, for example, a threshold value, e.g. greater than 0.5, 1.0, 1.5, 2.0 mm or degrees in x, y and/or z-direction or angular orientation”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Zaharchuk (US20210241458A1). Regarding Claim 2, Lang teaches the limitations of Claim 1 and further teaches the at least one first extraluminal image and the x-ray angiogram (Corresponding disclosure in at least [0032]), but does not teach a high dose contrast angiogram. Zaharchuk, in a similar field of endeavor, teaches a similar concept (contrast images) of a high dose contrast angiogram (corresponding disclosure in at least [0027], where there is a high dose contrast “ An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a high dose contrast angiogram as taught by Zaharchuk. 
One of the ordinary skill in the art would have been motivated to incorporate this because a high dose angiogram provides a more dynamic image with greater visualization of the target vessels and device. Regarding Claim 3, Lang teaches the limitations of Claim 1 and further teaches the at least one first extraluminal image and the x-ray angiogram (Corresponding disclosure in at least [0032]), but does not teach low dose contrast angiograms. Zaharchuk, in a similar field of endeavor, teaches a similar concept of low dose contrast angiograms (corresponding disclosure in at least [0025], where there are low contrast images “After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated low dose contrast angiograms as taught by Zaharchuk. One of the ordinary skill in the art would have been motivated to incorporate this because the low dose angiograms require less contrast, which minimizes the negative side effects of the contrast agent on the patient. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Florent (US20130116551A1). Regarding Claim 5, Lang teaches the limitations of Claim 1 and further teaches the one or more processors and the at least one first extraluminal image ([0032]), but does not teach the generation of a vessel map. Florent, in a similar field of endeavor, teaches a similar concept (angiograms) of the generation of a vessel map (corresponding disclosure in at least [0049], where vessel maps are generated “ 3D+t techniques deliver state of art vessel maps with a high quality and image information contents”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the vessel maps as taught by Florent. One of the ordinary skill in the art would have been motivated to incorporate this because the vessel maps provide fine detailed information regarding the structure of the vessels from the angiogram. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Amadou (US20230316544A1) and in further view of Fatouhi (US20250049516A1). Regarding Claim 7, Lang teaches the limitations of Claim 1 and further teaches wherein when training the AI model to automatically detect the working vessel (corresponding disclosure in at least [0324], where AI is used for detection “A virtual surgical plan 67 can be utilized. Optionally, the native anatomy of the patient including after a first surgical alteration can be displayed by the OHMD 68. The OHMD can optionally display digital holograms of subsequent surgical steps… image processing techniques, pattern recognition techniques or deep learning/artificial neural-network based techniques can be used to match virtual patient data and live patient data. Optionally, image processing and/or pattern recognition algorithms can be used to identify certain features”), the one or more processors are further configured to: annotate pre-contrast extraluminal images as line strips following a trajectory of at least one of a guide wire or a guide catheter (corresponding disclosure in at least [1088], where the pre-contrast image (pre-procedural) is annotated with a line strip (a line) following the trajectory “ Similarly, the OHMD display can optionally display some virtual data, e.g. 
pre-procedural images and/or image reconstructions, of the patient in 3D), while it can display other virtual data, e.g. aspects or components of the virtual plan, e.g. an intended guide wire trajectory, in 2D, e.g. as a line, or in 3D, e.g. as a 3D trajectory”); label a path of the working vessel in post-contrast extraluminal images (corresponding disclosure in at least [0637], where the path is labeled (displayed) “. A virtual plane or path or axis 149 (e.g. for placing a device (e.g. a catheter)) can be displayed by the OHMD 145 and, using a virtual interface 150, the plane or path or axis, as well as optionally virtual implants or instruments”), but does not teach providing the annotated pre-contrast extraluminal images and labeled post-contrast extraluminal images as training data to the AI model and training the AI model to predict a working vessel trajectory. Amadou, in a similar field of endeavor, teaches a similar concept (image tracking with AI), of providing the annotated pre-contrast extraluminal images and labeled post-contrast extraluminal images as training data to the AI model (corresponding disclosure in at least [0005], where there is an annotated pre/post image (first and last image) for training “A first image and a last image in the sequence of training images may be annotated with a ground truth location of the particular object. The location of the object of interest in the first input medical image may be manually annotated by a user or automatically annotated using a machine learning based network” and further in [0020], where the images are inputs for training “The first input medical image comprises an annotation of a location of an object of interest. The object of interest may be any suitable object of interest, such as, e.g., a medical instrument (e.g., catheter), an anatomical landmark, a lesion, etc”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated providing annotated images as training data to the AI model as taught by Amadou. One of the ordinary skill in the art would have been motivated to incorporate this because annotated images being used provides greater accuracy through supervision for the AI model. Lang and Amadou do not teach train the AI model to predict a working vessel trajectory. Fotouhi, in a similar field of endeavor, teaches a similar concept (visualizing trajectories) of training the AI model to predict a working vessel trajectory (corresponding disclosure in at least [0031], where the trajectory is predicted “the predicted trajectory and associated uncertainty may be determined using a neural network module, which has been trained using previously acquired images of the interventional device 146 together with corresponding previous control inputs used to position the interventional device”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated training the AI model to predict a working vessel trajectory as taught by Fotouhi. One of the ordinary skill in the art would have been motivated to incorporate this because the use of AI for predictions is a more efficient and effective way to determine placement and next movements for the device. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Amadou (US20230316544A1), Fatouhi (US20250049516A1), and in further view of Zaharchuk (US20210241458A1). 
Regarding Claim 8, Lang teaches the limitations of Claim 6 and further teaches wherein the one or more processors are further configured to: receive, as input into the AI model, at least one pre-contrast extraluminal image; detect, by executing the AI model and based on the at least one pre-contrast extraluminal image, a guide wire of the intravascular device (corresponding disclosure in at least [1044], where the guidewire is detected “Catheter or device tracking systems can be set up to determine the 2D or 3D position and orientation of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device)” and further in [0264], where these detection methods are through AI “Registration of virtual data with live data can be performed using a variety of techniques know in the art. These include, but are not limited to, surface registration algorithms such as the Iterative Closest Point algorithm, statistical models, Active Shape Models, mutual information-based or other volume registration algorithms, object recognition, pattern recognition or computer vision techniques, deep learning or other artificial intelligence methods”); propagate, on a frame by frame basis by executing the AI model and based on the detected guide wire, wire information (corresponding disclosure in at least [1048], where wire information is propagated frame by frame (the guidewire location and projection are projected in real time, which is frame by frame “The tracking system can be co-registered with fluoroscopy imaging, for example by installing the transmitter unit within the fluoroscopy detector of the x-ray imaging system. Using this or similar hardware setups, fluoroscopic imaging and electromagnetic sensor tracking can be pre-aligned and auto-registered. This can allow for 3-dimensional real-time tracking of a sensor-equipped catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) and projection of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device)”), but does not teach train, by augmenting high-dose extraluminal images into low-dose extraluminal images as input and automatically predict, by executing the AI model based on the propagated wire information, a working vessel trajectory. Amadou, in a similar field of endeavor, teaches a similar concept of train, by augmenting extraluminal images into extraluminal images as input (corresponding disclosure in at least [0039] of Amadou, where the images are augmented for training “ the sequences of training images may be augmented to make feature extraction more robust to noise in images and to various degrees of image quality. FIG. 4 shows augmented images for training a machine learning based location predictor network, in accordance with one or more embodiments. Image 402 is an original image and images 404-410 are augmented images generated from image” and further in [0040], where the images are high and low dose (with or without contrast agent, which is completed to decrease the amount of contrast agent used [0002] “Embodiments described herein work with x-ray images acquired with and without contrast agents without extra human intervention or extra data”). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated augmenting images as inputs as taught by Amadou. One of the ordinary skill in the art would have been motivated to incorporate this because augmenting the training images makes feature extraction more robust to noise in images ([0039] of Amadou). Lang and Amadou do not teach where the AI model to segment at least one of a guide catheter, guide wire, stent marker, or balloon marker on the low-dose extraluminal images and automatically predict, by executing the AI model based on the propagated wire information, a working vessel trajectory. Fatouhi, in a similar field of endeavor, teaches a similar concept of training the AI model to segment at least one of a guide catheter, guide wire, stent marker, or balloon marker on the extraluminal images (corresponding disclosure in at least [0061], where segmentation of the device is provided “the shape of the predicted trajectory may be captured using known image analysis techniques to provide a segmented predicted trajectory, and comparing segmented predicted trajectory to a desired trajectory for navigating the interventional device” and further in [0029], where it’s further described the device is a guide wire “The interventional device 146 may be any compatible medical instrument capable of robotic control, such as a catheter, a guidewire”); and automatically predict, by executing the AI model based on the propagated wire information, a working vessel trajectory (corresponding disclosure in at least [0031] of Fatouhi, where the trajectory is predicted “the predicted trajectory and associated uncertainty may be determined using a neural network module, which has been trained using previously acquired images of the interventional device 146 together with corresponding previous control inputs used to position the interventional device”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated segmenting the guide wire on the images and predict the trajectory of the working vessel using AI as taught by Fatouhi. One of the ordinary skill in the art would have been motivated to incorporate this because the prediction of trajectories provides optimal inputs with decreased failed attempts. The combined references of Lang, Amadou, and Fatouhi do not teach the low dose extraluminal images. Zaharchuk, in a similar field of endeavor, teaches a similar concept of high dose extraluminal images and low dose extraluminal images (corresponding disclosure in at least [0025], where there are low contrast images “After a pre-contrast (zero-dose) image 200 is acquired, a low dose (e.g., 10%) of contrast is administered and a low-dose image 202 is acquired” and further in [0027], where there is a high dose contrast “ An additional dose (e.g., 90%) of contrast is then administered to total a full 100% dose, and a full-dose image 204 is then acquired”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated low and high dose images as taught by Zaharchuk. One of the ordinary skill in the art would have been motivated to incorporate this because the high contrast images provide better dynamic visualization while the low contrast decreases the amount of dose is injected into a patient. The high contrast is a strong baseline to be input into a model with the low dose images being used for training. 
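The augmentation concept relied on for claim 8 — deriving low-dose counterparts of high-dose extraluminal images for training — can be illustrated with a minimal sketch. It assumes pixel values roughly proportional to photon counts and simulates dose reduction by Poisson re-sampling; the dose fraction and the count model are assumptions, not the method of Amadou or Zaharchuk.

```python
# Minimal sketch (assumption: pixel values approximate photon counts): derive a
# simulated low-dose frame from a high-dose frame by scaling counts down,
# re-sampling Poisson shot noise, and rescaling back to a comparable brightness.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_low_dose(high_dose: np.ndarray, dose_fraction: float = 0.1) -> np.ndarray:
    counts = np.clip(high_dose, 0, None) * dose_fraction   # fewer photons at the reduced dose
    noisy = rng.poisson(counts).astype(np.float64)          # shot noise at the reduced dose
    return noisy / dose_fraction                            # rescale so intensities stay comparable

high_dose_frame = rng.uniform(0, 500, size=(64, 64))        # synthetic "high-dose" frame
low_dose_frame = simulate_low_dose(high_dose_frame, dose_fraction=0.1)
print(low_dose_frame.shape, low_dose_frame.std() > high_dose_frame.std())
```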
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Ayvali (US20230225802A1). Regarding Claim 15, Lang teaches the limitations of Claim 14, and further teaches the absolute spatial information ([0388]), but does not teach the optical flow between adjacent frames. Ayvali, in a similar field of endeavor, teaches a similar concept (navigation of medical instruments) of optical flow between adjacent image frames (corresponding disclosure in at least [0177], where optical flow is used (the optical flow is used for a video sequence, or image frames) “the localization component 1314 can identify circular geometries in pre-operative model data that correspond to anatomical lumens and track the change of those geometries to determine which anatomical lumen was selected, as well as the relative rotational and/or translational motion of the medical instrument. Use of a topological map can also enhance vision-based algorithms or techniques. Furthermore, the localization component 1314 can use optical flow, another computer vision-based technique, to analyze displacement and/or translation of image pixels in a video sequence in vision data to infer camera movement”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated optical flow between adjacent frames as taught by Ayvali. One of the ordinary skill in the art would have been motivated to incorporate this because optical flow comparisons between each pair of frames determine the movement and location of the instrument. Regarding Claim 17, Lang teaches the limitations of Claim 14, and further teaches the absolute spatial information ([0388]), but does not teach the optical flow between adjacent frames. Ayvali, in a similar field of endeavor, teaches a similar concept (navigation of medical instruments) of optical flow between adjacent image frames (corresponding disclosure in at least [0177], where optical flow is used (the optical flow is used for a video sequence, or image frames) “the localization component 1314 can identify circular geometries in pre-operative model data that correspond to anatomical lumens and track the change of those geometries to determine which anatomical lumen was selected, as well as the relative rotational and/or translational motion of the medical instrument. Use of a topological map can also enhance vision-based algorithms or techniques. Furthermore, the localization component 1314 can use optical flow, another computer vision-based technique, to analyze displacement and/or translation of image pixels in a video sequence in vision data to infer camera movement”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated optical flow between adjacent frames as taught by Ayvali. One of the ordinary skill in the art would have been motivated to incorporate this because optical flow comparisons between each pair of frames determine the movement and location of the instrument. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Butler (US20200245965A1). 
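The optical-flow limitation addressed above for claims 15 and 17 — motion estimated between adjacent image frames — can be sketched as follows. Dense Farneback flow from OpenCV is used purely as a readily available stand-in; the synthetic frames and the read-out at the instrument location are illustrative assumptions, not Ayvali's localization component.

```python
# Minimal sketch: dense optical flow between two adjacent grayscale frames, then a
# read-out of the flow at the (bright) instrument location as a rough motion estimate.
import numpy as np
import cv2

yy, xx = np.mgrid[0:128, 0:128]
frame_prev = (np.exp(-((xx - 50) ** 2 + (yy - 60) ** 2) / 200.0) * 255).astype(np.uint8)
frame_next = np.roll(frame_prev, shift=(2, 3), axis=(0, 1))  # synthetic move: 2 px down, 3 px right

flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Sample the flow at the brightest pixel (a stand-in for the detected instrument tip).
y0, x0 = np.unravel_index(frame_prev.argmax(), frame_prev.shape)
dx, dy = flow[y0, x0]
print(f"estimated frame-to-frame motion at the instrument: dx={dx:.1f}, dy={dy:.1f}")
```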
Regarding Claim 19, Lang teaches the limitations of Claim 18 and further teaches the one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), detected motion features (corresponding disclosure in at least [0032], where the motion feature (tracked instrument) is detected in the image “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor”), and at least one first extraluminal image and at least one of the second extraluminal images (corresponding disclosure in at least [0032], where there is at least a first and second extraluminal image captured “wherein the optical head mounted display is configured to display a virtual 3D image of the tracked instrument or device aligned with at least one of the corresponding intra-operative angiogram of vascular structures of the patient displayed on the computer monitor or the virtual pre-operative vascular 3D image displayed by the optical head mounted display” and further in [0534], where a second image is captured during a drug delivery (a registration process, which involves receiving another image set, is completed during drug injection) “If a placement of a medical implant component, a trial implant, a tissue graft, a tissue matrix, a transplant, a catheter, a surgical instrument or an injection of cells or a drug is performed, the registration procedure can be repeated after the surgical step or surgical alteration has been performed”), but does not teach a spatial-temporal phase match between the images. Butler, in a similar field of endeavor, teaches a similar concept (detection and image features) of a spatial-temporal phase match between the images (corresponding disclosure in at least [0083], where there is spatial matching between images using motion features (cardiac information) “Therefore, spatial matching from the AP and lateral projections may be employed to infer the spatial configuration of an anatomic structure in three dimensions. The spatial mismatch error is reduced by utilizing physiological coherence to perform the spatial matching. Thus, cardiac frequency angiographic phenomena (e.g., and in particular, the phase information) as extracted by the cardiac phenomena transform (e.g., wavelet angiography), allows reconstruction with reduced spatial mismatch error”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a spatial-temporal phase match between the images as taught by Butler. One of the ordinary skill in the art would have been motivated to incorporate this because it enhances image quality particularly when movement may be involved. Claims 20-30 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) and Butler (US20200245965A1) as applied in Claim 19 and in further view of Ma (“Dynamic Analysis of X-ray Angiography for Image-Guided Coronary Interventions”, 2020, Erasmus University Rotterdam, disclosed in applicant IDS). 
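The spatial-temporal phase match discussed above for claim 19 can be pictured generically as pairing frames from two acquisitions by cardiac phase. The sketch below derives an instantaneous phase from a synthetic cardiac-frequency surrogate signal via the analytic (Hilbert) signal and matches frames by nearest phase; the surrogate signal and the matching rule are assumptions for illustration, not Butler's wavelet-angiography approach.

```python
# Minimal sketch: match frames from two sequences by cardiac phase. A per-frame
# surrogate signal (synthetic sinusoids here) stands in for the cardiac-frequency
# content; instantaneous phase comes from the analytic signal.
import numpy as np
from scipy.signal import hilbert

fps = 15.0
t = np.arange(60) / fps
signal_a = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm surrogate, sequence A
signal_b = np.sin(2 * np.pi * 1.2 * t + 0.9)       # same rate, phase-offset sequence B

phase_a = np.angle(hilbert(signal_a))               # instantaneous cardiac phase per frame
phase_b = np.angle(hilbert(signal_b))

# For each frame in A, pick the frame in B whose cardiac phase is closest (mod 2*pi).
diff = np.angle(np.exp(1j * (phase_a[:, None] - phase_b[None, :])))
matched_b_for_a = np.abs(diff).argmin(axis=1)
print(matched_b_for_a[:10])
```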
Regarding Claim 20, Lang and Butler teach the limitations of Claim 19, and further teach wherein the one or more processors are further configured to: resample the detected motion features at a common frame rate (corresponding disclosure in at least [0079] of Butler, where the images are resampled at a common frame rate (the image processing steps are repeated and are synchronized based on the cardiac frequency or the motion features) “Processing may be repeated for each set of frames, e.g., such that frames corresponding to angle θ1, angle θ2, angle θN may be processed using an inverse Penrose transform… Here, it is assumed that each sequence comprising cardiac frequency angiographic phenomena has been synchronized, e.g., based on phase and/or magnitude of the cardiac frequency angiographic phenomena. In some aspects, interpolation may be used when aligning sequences of cardiac frequency angiographic phenomena. Thus, these techniques provide for visualization of a 3D vascular pulse wave as a function of time”); determine a maximum correlation coefficient for pairs of motion features in the at least one first extraluminal image and the second extraluminal images (corresponding disclosure in at least [1248] of Lang, where “Once a correlation between the imaging data and the respiratory gating system(s) has been established, e.g. specific image coordinates of the kidney or renal pelvis for a given or each phase of the respiratory cycle, the operator can optionally terminate the imaging procedure, while the respiratory gating and related data acquisition and analysis can continue”), but do not teach determining a maximum correlation coefficient or determining, based on the maximum correlation coefficient, a time shift. Ma, in a similar field of endeavor, teaches a similar concept (image analysis) of determining, based on the maximum correlation coefficient, a time shift (corresponding disclosure in at least [pg. 88, 6.3 “ECG Matching for Roadmap Selection”], where there is a correlation coefficient (highest correlation score) “The two ECG signals are first cross-correlated for every possible position on the signals, resulting in a 1D vector of correlation scores. The candidate frame for dynamic overlay is then selected as the one associated with the point on the ECG of the angiographic sequence that is corresponding to the highest correlation score”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a maximum correlation coefficient and determining a time shift as taught by Ma. One of the ordinary skill in the art would have been motivated to incorporate this because it helps in alignment of different images for image registration. Regarding Claim 21, the combined references noted above teach the limitations of Claim 20, and Lang further teaches wherein the one or more processors are further configured to: determine a drift between detected motion features in the at least one first extraluminal image; and adjust, based on the determined drift, the time shift (corresponding disclosure in at least [1048] of Lang, where there is an adjustment (compensation for movement) based on the motion features “the 3D models of the vasculature and/or heart and/or lungs and/or neurovascular anatomy derived from preoperative or intra-operative images by one or more OHMDs, compensated for cardiac movement due to heartbeat and organ motion during respiration and patient movement”). 
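The claim 20 elements discussed above — resampling motion features to a common frame rate and taking the lag of the maximum correlation coefficient as a time shift — can be sketched with one-dimensional signals. The frame rates, the synthetic motion signals, and the normalization below are illustrative assumptions rather than the claimed or cited algorithm.

```python
# Minimal sketch: resample two motion-feature signals acquired at different frame
# rates onto a common rate, then take the lag of the maximum normalized
# cross-correlation as the time shift between the acquisitions.
import numpy as np

fps_a, fps_b, common_fps, duration = 30.0, 15.0, 30.0, 4.0
t_a = np.arange(0, duration, 1 / fps_a)
t_b = np.arange(0, duration, 1 / fps_b)
true_shift = 0.20                                       # sequence B lags A by 200 ms
motion_a = np.sin(2 * np.pi * 1.2 * t_a)
motion_b = np.sin(2 * np.pi * 1.2 * (t_b - true_shift))

t_common = np.arange(0, duration, 1 / common_fps)
a = np.interp(t_common, t_a, motion_a)                  # resample to the common frame rate
b = np.interp(t_common, t_b, motion_b)
a = (a - a.mean()) / a.std()
b = (b - b.mean()) / b.std()

corr = np.correlate(b, a, mode="full") / len(a)         # normalized cross-correlation
lags = np.arange(-len(a) + 1, len(b))
best_lag = lags[corr.argmax()]                          # lag with the maximum correlation coefficient
print(f"sequence B lags A by {best_lag / common_fps:.2f} s (true shift {true_shift} s)")
```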
Regarding Claim 22, the combined references noted above teach the limitations of Claim 20, and Butler further teaches wherein the one or more processors are further configured to: iteratively predict the time shift; and update, based on the iteratively predicted time shift, the time shift to a corrected time shift (corresponding disclosure in at least [0056], where there is an update of the time shift (cross correlation is completed to normalize or correct the shift) “a cross-correlation analysis may be performed on the data from the sensors 300, 400 producing a matched frequency of the two signals with time-shift producing definitive positive correlation based on set a criteria. In this case, the cross-correlation may be the sum of the product of the two signals shifted relative to each other over a period of not less than one complete cycle of the longer period waveform. In yet another embodiment auto-correlation may be used to normalize the values for better threshold detection comparison of the cross-correlation peak value”). Regarding Claim 23, the combined references noted above teach the limitations of Claim 20, and Ma further teaches wherein the one or more processors are further configured to tune the spatial-temporal phase match (corresponding disclosure in at least [6.3 “ECG Matching for Roadmap Selection”], where there is a phase match (temporal mapping and imaging/spatial matching is also completed) “Roadmap selection in this work is achieved by comparing the ECG signal associated with the fluoroscopic image and the ECG of the angiographic sequence, such that the most suitable candidate roadmap is selected where the best match of the ECG signals is found. The selected roadmap has the same (or very similar) cardiac phase with the X-ray fluoroscopic image, which compensates the difference of vessel shape and pose induced by cardiac motion… To select roadmaps images based on ECG, a temporal mapping between X-ray images and ECG signal points needs to be built first. We assume that ECG signals and X-ray images are well synchronized during acqusition.”). Regarding Claim 24, the combined references noted above teach the limitations of Claim 23, and Ma further teaches wherein when tuning the spatial-temporal phase match the one or more processors are further configured to: identify another first extraluminal image different than the at least one of the first extraluminal images; and tune, based on the other first extraluminal image, the spatial-temporal phase match (corresponding disclosure in at least [6.3 “ECG Matching for Roadmap Selection”], where there is a phase match (temporal mapping and imaging/spatial matching is also completed) “Roadmap selection in this work is achieved by comparing the ECG signal associated with the fluoroscopic image and the ECG of the angiographic sequence, such that the most suitable candidate roadmap is selected where the best match of the ECG signals is found. The selected roadmap has the same (or very similar) cardiac phase with the X-ray fluoroscopic image, which compensates the difference of vessel shape and pose induced by cardiac motion… To select roadmaps images based on ECG, a temporal mapping between X-ray images and ECG signal points needs to be built first. We assume that ECG signals and X-ray images are well synchronized during acqusition.”). 
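The ECG-based matching cited from Ma for claims 23 and 24 amounts to sliding a short ECG segment tied to the live fluoroscopic frame along the ECG of the angiographic run and picking the roadmap frame at the highest correlation score. A minimal sketch follows; the signal shapes, sampling rates, and the ECG-sample-to-frame mapping are assumptions for illustration only.

```python
# Minimal sketch of ECG-based roadmap selection: score every candidate position of
# the live-frame ECG segment against the angiographic sequence's ECG and select the
# roadmap frame at the best-matching position.
import numpy as np

ecg_rate = 250.0                                         # ECG samples per second (assumed)
fps = 15.0                                               # angiographic frame rate (assumed)
t = np.arange(0, 4, 1 / ecg_rate)
angio_ecg = np.sin(2 * np.pi * 1.25 * t) ** 3            # stylized periodic "ECG" of the angio run

fluoro_ecg = angio_ecg[530:530 + 125]                    # short snippet tied to the live fluoro frame

# Normalized correlation score at every candidate position along the angio ECG.
seg = (fluoro_ecg - fluoro_ecg.mean()) / fluoro_ecg.std()
windows = np.lib.stride_tricks.sliding_window_view(angio_ecg, len(seg))
scores = np.array([np.dot(seg, (w - w.mean()) / (w.std() + 1e-9)) for w in windows])

best_start = int(scores.argmax())                        # ECG position with the highest score
roadmap_frame = int(round(best_start / ecg_rate * fps))  # map ECG sample index to a frame index
print(f"selected roadmap frame: {roadmap_frame}")
```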
Regarding Claim 25, the combined references noted above teach the limitations of Claim 23, and Ma further teaches wherein the one or more processors are further configured to update based on the tuning, the real-time visualization to include the other first extraluminal image (corresponding disclosure in at least [pg. 108, 6.6.4 “Contributions”], where the process is completed with real-time visualization “The particle filtering(PF) step, which consists of the optical flow estimation, sample propagation, sample weight update and normalization, prediction and resampling, takes on average 23/ms frame… The total average time of the proposed DCR including roadmap selection ,catheter tip tracking and roadmap transformation is still less than the acquisition time of our data (66.7 ms / frame, 15 fps), indicating that the proposed DCR method would run in real-time with our setup” and further in [pg. 115, 6.7 “Discussion”] (the steps are completed after the tuning or the offline phase) “From a practical point of view, the proposed DCR approach could easily fit into the clinical workflow of PCI. The offline phase of the method can be done efficiently by a technical assistant of the procedures: selecting and creating roadmaps from an angiography acquisition, annotating the catheter tip (one point) in the images. This phase is typically done before a fluoroscopy acquisition during which the guidewire advancement and stent placement are performed. In the online phase, when a fluoroscopic image is acquired, the proposed system selects the most suitable roadmap, tracks the catheter tip and transforms the roadmap to prospectively show a vessel overlay on the fluorosocpic image. The online updated coronary roadmap can provide real-time visual guidance to cardiologists to manipulate interventional tools during the procedure without the need of administering extra contrast agent”). Regarding Claim 26, the combined references noted above teach the limitations of Claim 21, and Ma further teaches wherein the one or more processors are further configured to detect the intravascular device (corresponding disclosure in at least [pg. 93, 6.4.4 “Summary”], where the device is detected (catheter is being tracked) “The overall catheter tip tracking using a deep learning based Bayesian filtering method is summarized in Algorithm”). Regarding Claim 27, the combined references noted above teach the limitations of Claim 26, and Ma further teaches wherein when detecting the intravascular device in the second extraluminal images the one or more processors are further configured to execute a AI model (corresponding disclosure in at least [pg. 93, 6.4.4 “Summary”], where the methods execute an AI model (deep learning method) “The overall catheter tip tracking using a deep learning based Bayesian filtering method is summarized in Algorithm”). Regarding Claim 28, the combined references noted above teach the limitations of Claim 27, and Ma further teaches wherein the one or more processors are further configured to train the AI model, wherein when training the AI model the one or more processors are further configured to: provide as input to the AI model a co-registration dataset comprising a plurality of intraluminal images and extraluminal images, wherein the plurality of intraluminal and extraluminal images are annotated images (corresponding disclosure in at least [pg. 87, 6.2.1. 
“Offline Phase”], where there are annotated images of a plurality of extraluminal images (XA sequence) “In this work we manually annotated the catheter tip in the offline XA sequence. In real clinical scenarios, the annotation work can be done easily and efficiently by a technician who typically sits in front of monitors outside the catheterization lab to assist the procedure”); and train the AI model to predict a position of the intravascular device (corresponding disclosure in at least [pg. 98 , 6.6.1 “Training the Deep Neural Network”], where AI predicts the position of the device (catheter) “The training and validation data for detection mentioned in Section 6.5.2 were used for training the deep neural network. The evaluation metric mentioned in Section 6.5.3, the mean Euclidean distance between the ground truth and the predicted tip location averaged over all validation frames” and further in [pg. 121, 7.2 “Future Perspectives”], where there is a co-registration dataset (the CTA and XA images) “. The approach developed in Chapter 5 for contrast inflow detection assists automating the workflow of image guidance tasks, e.g. knowing when to apply registration of preoperative coronary model from CTA to XA images”). Regarding Claim 29, the combined references noted above teach the limitations of Claim 28, and Lang further teaches one or more intravascular device markers (corresponding disclosure in at least [0304] of Lang, where there are intravascular device markers) “a first virtual instrument can be displayed on a computer monitor which is a representation of a physical instrument tracked with navigation markers, e.g. infrared or RF markers, and the position and/or orientation of the first virtual instrument”), but does not teach wherein the annotated images include annotations. Ma, in a similar field of endeavor, teaches a similar concept of the annotated images (corresponding disclosure in at least [pg. 87, 6.2.1. “Offline Phase”], where there are annotated images of the instrument “the annotation work can be done easily and efficiently by a technician who typically sits in front of monitors outside the catheterization lab to assist the procedure”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have included annotations for the images. One of the ordinary skill in the art would have been motivated to incorporate this because the annotated images are used in training the machine learning model for accurate predictions. Regarding Claim 30, the combined references noted above teach the limitations of Claim 26, and Ma further teaches wherein the one or more processors are further configured to: detect an optical flow of the intravascular device (corresponding disclosure in at least [pg. 93, 6.4 “Bayesian Filtering for Catheter Tip Tracking”], where there is optical flow detected of the device (catheter tip) “we estimated the motion from adjacent frames using an optical flow method, as this approach 1) takes into account of the observation zk, which results in a better guess of the catheter tip motion”); determine, based on the detected optical flow, a position of the intravascular device in a first frame of the second extraluminal images (corresponding disclosure in at least [pg. 
93, 6.4 “Bayesian Filtering for Catheter Tip Tracking”], where the position is determined (using optical flow) “The final decision on catheter tip location in frame k can then be computed”); and predict, based on the detected optical flow, the position of the intravascular device in a subsequent frame of the second extraluminal images (corresponding disclosure in at least [pg. 100, 6.6.2.1 “Tuning Optical Flow Parameters”], where there is a prediction of the position in subsequent frames (using optical flow estimation of the catheter between frames) “The above parameters were tuned independently of the deep neural network, as optical flow directly estimates the catheter tip motion between two frames”). Claims 35-38 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Amadou (US20230316544A1). Regarding Claim 35, Lang teaches the limitations of Claim 31 and further teaches the one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), but does not teach receiving annotations of the at least one first extraluminal image or the second extraluminal images. Amadou, in a similar field of endeavor, teaches a similar concept of receiving annotations of the at least one first extraluminal image or the second extraluminal images (corresponding disclosure in at least [0025], where annotations of images are received “a location of the object of interest in the second input medical image is determined using a machine learning based location predictor network based on the annotation of the location of the object of interest in the first input medical image and the extracted features from the first and the second input medical images”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated annotations as taught by Amadou. One of the ordinary skill in the art would have been motivated to incorporate this because the annotations provide structural information and detail regarding the image. Regarding Claim 36, Lang and Amadou teach the limitations of Claim 35 and Amadou further teaches wherein when receiving the annotations the one or more processors are further configured to: receive one or more inputs from a user corresponding to the annotations; or automatically determine, based on vessel data, the annotations (corresponding disclosure in at least [0005], where the annotations are from inputs from a user “The location of the object of interest in the first input medical image may be manually annotated by a user or automatically annotated using a machine learning based network”). 
Regarding Claim 37, Lang and Amadou teach the limitations of Claim 35 and Amadou further teaches wherein the annotations include one or more of a plaque burden, fractional flow reserve (“FFR”) measurements at one or more locations along a vessel, calcium angles, EEL detections, calcium detections, proximal frames, distal frames, EEL-based metrics, stent decisions, scores, recommendations for debulking, recommendations for subsequent procedures, stent placement zone, treatment device landing zone, balloon device zone, vessel prep device zone, or lesion related zone (corresponding disclosure in at least [0020], where the annotations include a lesion related zone “The first input medical image comprises an annotation of a location of an object of interest. The object of interest may be any suitable object of interest, such as, e.g., a medical instrument (e.g., catheter), an anatomical landmark, a lesion, etc. The location of the object of interest may be manually annotated in the first input medical image by a user (e.g., a clinical) or may be automatically annotated, e.g., by an upstream system in the clinical workflow”). Regarding Claim 38, Lang and Amadou teach the limitations of Claim 35 and Amadou further teaches wherein the one or more processors are further configured to update, based on the received annotations, a second one of the at least one first extraluminal image or the second extraluminal images (corresponding disclosure in at least [0025], where the image is updated with the location based on annotations “a location of the object of interest in the second input medical image is determined using a machine learning based location predictor network based on the annotation of the location of the object of interest in the first input medical image and the extracted features from the first and the second input medical images”). Claims 45-51 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Mino (US20230123739A1). Regarding Claim 45, Lang teaches the limitations of Claim 39, and further teaches the one or more processors and real-time visualization, but does not teach automatically zooming a portion of the real-time visualization. Mino, in a similar field of endeavor, teaches a similar concept (surgical visualization) of automatically zooming a portion of the real-time visualization (corresponding disclosure in at least [0099], where there is automatic zooming of the visualized image “the displayed region of the reconstructed or integrated 3D images can be automatically adjusted in accordance with the endoscope navigation plan. In addition, the display 543 may display the real-time 3D image of the patent's anatomy during the procedure. Further, the output unit 542 may automatically zoom in or zoom out a region in the image of the patient anatomy based on a position or direction of a distal end of the endoscope relative to an anatomical target. ”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the automatic zooming as taught by Mino. One of the ordinary skill in the art would have been motivated to incorporate this because the zoom provides more structural details of the region of interest. 
Regarding Claim 46, Lang and Mino teach the limitations of Claim 45, and Mino further teaches wherein the portion of the real-time visualization is automatically zoomed when the intravascular device is within a threshold distance of a region of interest (corresponding disclosure in at least [0099], where there is automatic zooming when reaching a threshold distance of a region of interest (the zooming occurs as it gets closer to the structure) “e. Further, the output unit 542 may automatically zoom in or zoom out a region in the image of the patient anatomy based on a position or direction of a distal end of the endoscope relative to an anatomical target. For example, the output unit 542 may automatically zoom in an image as the endoscope tip gets closer to duodenal papilla to show more structural details”). Regarding Claim 47, Lang and Mino teach the limitations of Claim 46, and wherein the portion of the real-time visualization corresponds to the region of interest (corresponding disclosure in at least [0099], where there is automatic zooming of a region of interest (the zooming occurs as it gets closer to the structure) “e. Further, the output unit 542 may automatically zoom in or zoom out a region in the image of the patient anatomy based on a position or direction of a distal end of the endoscope relative to an anatomical target. For example, the output unit 542 may automatically zoom in an image as the endoscope tip gets closer to duodenal papilla to show more structural details”). Regarding Claim 48, Lang and Mino teach the limitations of Claim 47, and Mino further teaches wherein the region of interest is a treatment zone or a location of the intravascular device (corresponding disclosure in at least [0099], where there is automatic zooming of a region of interest (the zooming occurs as it gets closer to the structure) “e. Further, the output unit 542 may automatically zoom in or zoom out a region in the image of the patient anatomy based on a position or direction of a distal end of the endoscope relative to an anatomical target. For example, the output unit 542 may automatically zoom in an image as the endoscope tip gets closer to duodenal papilla to show more structural details” and further in [0009], where it is further mentioned that the region of interest is a treatment zone (the duodenal papilla is the area of interest where treatment is focused) “ In some patients, stricture ahead of pancreas can compress the stomach and part of duodenum, making it difficult to navigate the duodenoscope in a limited lumen of the compressed duodenum and to navigate the cholangioscope to reach the duodenal papilla, the point where the dilated junction of the pancreatic duct and the bile duct (ampulla of Vater) enter the duodenum”). Regarding Claim 49, Lang and Mino teach the limitations of Claim 48, and Mino further teaches wherein the treatment zone is at least one of a treatment device landing zone, a balloon device zone, a vessel prep device zone, or a lesion related zone (corresponding disclosure in at least [0099], where the treatment zone is a treatment device landing zone (the treatment or region of interest area is where the device will be, or the landing zone (the endoscope) “, the output unit 542 may automatically zoom in an image as the endoscope tip gets closer to duodenal papilla to show more structural details”). 
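The zoom behavior mapped above for claims 45-49 — automatically magnifying the visualization once the device comes within a threshold distance of a region of interest — reduces to a distance test followed by a crop-and-upscale. A minimal sketch follows; the marker position, threshold, and zoom factor are illustrative assumptions (the Euclidean pixel-distance test echoes the claim 50/51 discussion that comes next).

```python
# Minimal sketch: zoom the real-time visualization around a region of interest
# when the detected device marker comes within a pixel-distance threshold of it.
import numpy as np

def maybe_zoom(frame: np.ndarray, marker_xy, roi_center_xy, roi_half=40,
               threshold_px=60.0, zoom=2):
    """Return a zoomed crop around the ROI if the marker is close enough, else the frame."""
    distance = float(np.hypot(marker_xy[0] - roi_center_xy[0],
                              marker_xy[1] - roi_center_xy[1]))   # Euclidean distance in pixels
    if distance > threshold_px:
        return frame                                              # device still far: no zoom
    cx, cy = roi_center_xy
    x0, x1 = max(cx - roi_half, 0), min(cx + roi_half, frame.shape[1])
    y0, y1 = max(cy - roi_half, 0), min(cy + roi_half, frame.shape[0])
    crop = frame[y0:y1, x0:x1]
    return np.kron(crop, np.ones((zoom, zoom)))                   # simple nearest-neighbor upscale

frame = np.random.default_rng(0).random((256, 256))
print(maybe_zoom(frame, marker_xy=(120, 128), roi_center_xy=(130, 130)).shape)  # zoomed: (160, 160)
```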
Regarding Claim 50, Lang and Mino teach the limitations of Claim 46, and Lang further teaches wherein determining the threshold distance comprises at least one of: determining a number of pixels between an outer boundary of the region of interest and at least one detected marker on the intravascular device, or determining a spatial distance between the at least one detected marker on the intravascular device and the region of interest (corresponding disclosure in at least [0522], where the threshold difference is based on the region of interest (there is a threshold distance between the plan (defined region of interest) “If the differences are deemed to be insignificant, for example, if they fall below an, optionally predefined, threshold in distance or angular deviation, the surgical procedure and subsequent surgical steps can continue as originally planned, e.g. in the virtual surgical plan. If the differences are deemed to be significant, for example, if they fall above an, optionally predefined, threshold in distance or angular deviation” and further in [0966]-[0967], where there are markers on the device (instrument) and the threshold distance can be determined between the marker and the boundary (the measured difference) “the tracking data obtained outside the patient's body can be compared with the tracking data obtained inside the patient's body. Any differences in measured coordinates of the one or more pointers or pointing devices and any other instruments can be determined. If these differences exceed, for example, a threshold value, e.g. greater than 0.5, 1.0, 1.5, 2.0 mm or degrees in x, y and/or z-direction or angular orientation, it can trigger an alert. An alert can, for example, suggest to repeat the registration outside the patient's body, inside the patient's body or both. Any differences in coordinates of the one or more pointers or pointing devices and any other instruments including measured inside the patient's body as compared to measured outside the patient's body can optionally also be reconciled using, for example, statistical methods, e.g. using means, weighted means, medians, standard deviations etc. of measured coordinates”). Regarding Claim 51, Lang and Mino teach the limitations of Claim 50, and Lang further teaches wherein the spatial distance is a Euclidean distance or a geodesic distance (corresponding disclosure in at least [0966], where the spatial distance is a Euclidian distance (the distance is between points within the x-y-z direction) “Any differences in measured coordinates of the one or more pointers or pointing devices and any other instruments can be determined. If these differences exceed, for example, a threshold value, e.g. greater than 0.5, 1.0, 1.5, 2.0 mm or degrees in x, y and/or z-direction or angular orientation”). Claim 52 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) and Mino (US20230123739A1) as applied in Claim 46, and in further view of Rico (US20100158332A1). Regarding Claim 52, Lang and Mino teach the limitations of Claim 46, and Mino further teaches the automatically zooming (corresponding disclosure in at least [0099]), but does not teach sorting pixel values of the real-time visualization by their intensity, normalizing pixel intensity values lower than a predetermined threshold, and applying a median filter to the normalized pixel intensity values of the real time visualization. 
Rico, in a similar field of endeavor, teaches a similar concept (image analysis) of sorting pixel values of the real-time visualization by their intensity (corresponding disclosure in at least [0100], where there are pixel values being sorted by intensity “MeanLesion is the mean grey value of pixels inside the lesion and MeanBackground is the mean grey value of pixels of the background region(s)” and further in [0102], where there is a sorting step of the pixel values “After an analysis of features computed or extracted from the image as described above and with false positives removed, at a final step (step 324), all blobs having likelihood values above a threshold value are reported, for example, by showing them on a display device, for further study by a radiologist, or forwarded to additional CAD modules for further automated analysis. They may be reported after sorting”) ; normalizing pixel intensity values lower than a predetermined threshold (corresponding disclosure in at least [0037], where pixels are normalized “Next step is to normalize (step 130) image intensities. As is known to those skilled in the art, intensities of pixels produced by hardware devices of most medical imaging modalities generally suffer from inconsistency introduced by variations in image acquisition hardware”, and further in [0039], where the pixels are normalized to a lower value (the pixels are based on a particular threshold) “The normalized image is next processed to detect distinct areas of contiguous pixels, or “blobs”, that have consistent or similar internal intensity characteristics (step 140). Different methods of detecting blobs may be employed. In general, one first generates a parameter map, i.e., spatial variation of parameter values at each pixel of the image, for a selected parameter. Then, contiguous pixels having the parameter values satisfying certain criteria, such as exceeding a threshold value, below a threshold value or within a pre-determined range, and forming distinct areas are identified as belonging to blobs, with each distinct area being a detected blob”), and applying a median filter to the normalized pixel intensity values of the real time visualization (corresponding disclosure in at least [0118], where a median filter is applied “ the morphological filter can be replaced by a median filter to produce a similar result”). Claims 53-56 and 58 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) in view of Kunio (US20220346885A1). Regarding Claim 53, Lang teaches a system comprising: one or more processors, the one or more processors configured to: receive extraluminal images captured during delivery of an intravascular device, wherein the intravascular device has a radio-opaque marker (corresponding disclosure in at least [1045], where there are images captured of the device (catheter), which has the radio-opaque markers “the catheter and/or catheter tip or other device (e.g. 
a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) can contain one or more radiopaque markers that can be visible on the x-ray images”); detect a plurality of device marker candidates, wherein at least one of the device marker candidates corresponds to the radio-opaque marker of the intravascular device (corresponding disclosure in at least [1045], where there are a plurality of markers, (radiopaque markers) which are visible in the image (detected) “ The catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) can contain one or more radiopaque markers that can be visible on the x-ray images. To locate the catheter and/or catheter tip or other device”); Lang does not teach automatically detect, using an artificial intelligence (AI) model, a working vessel using a plurality of virtual boxes and at least one of the plurality of device marker candidates, wherein at least one of the plurality of virtual boxes contains the at least one of the plurality of device marker candidates and select at least one of the plurality of the virtual boxes that includes a region of interest of the working vessel that contains the at least one of the plurality of device markers. Kunio, in a similar field of endeavor, teaches a similar concept (radiopaque markers) of automatically detect, using an artificial intelligence (AI) model, a working vessel using a plurality of virtual boxes and at least one of the plurality of device marker candidates (corresponding disclosure in at least [0027], where there is detection of markers using AI “training methods may include or have one or more of the following conditions: (i) the parameters include one or more hyper-parameters; (ii) the saved, trained model is used as a created detection system for identifying or detecting a marker(s) or radiopaque marker(s) in angiography image data” and further in [0088], where there are radiopaque markers, which track the working vessel with virtual boxes (region of interest) “The catheter 120 may include a probe tip, one or more radiopaque markers, an optical fiber, and a torque wire. The probe tip may include one or more data collection systems. The catheter 120 may be threaded in an artery of the patient 106 to obtain images of the coronary artery. The patient interface unit no may include a motor M inside to enable pullback of imaging optics during the acquisition of intravascular image frames. The imaging pullback procedure may obtain images of the blood vessel. The imaging pullback path may represent the co-registration path, which may be a region of interest or a targeted region of the vessel”); wherein at least one of the plurality of virtual boxes contains the at least one of the plurality of device marker candidates (corresponding disclosure in at least [0105], where there are multiple virtual boxes (frames) which contain the device markers “one example of a marker detection success rate is to calculate the number of frames for which the predicted and the true radiopaque marker locations are considered the same (e.g., when the distance between predicted and true marker positions is within a certain tolerance or below a pre-defined distance threshold, which is defined by a user or pre-defined in the system (e.g., the distance threshold may be set at 1.0 mm); etc.) 
divided by the total number of frames obtained, received, or imaged during the OCT pullback. According to a first method where a user specifies a pullback region on one frame, according to a second method where a user points out marker location on several or multiple frames, and according to a third method where a user specifies a pullback region on multiple frames”); and select at least one of the plurality of the virtual boxes that includes a region of interest of the working vessel that contains the at least one of the plurality of device markers (corresponding disclosure in at least [0105], where there is a region of interest included in the virtual box (frame) “one example of a marker detection success rate is to calculate the number of frames for which the predicted and the true radiopaque marker locations are considered the same (e.g., when the distance between predicted and true marker positions is within a certain tolerance or below a pre-defined distance threshold, which is defined by a user or pre-defined in the system (e.g., the distance threshold may be set at 1.0 mm); etc.) divided by the total number of frames obtained, received, or imaged during the OCT pullback. According to a first method where a user specifies a pullback region on one frame, according to a second method where a user points out marker location on several or multiple frames, and according to a third method where a user specifies a pullback region on multiple frames”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated detection of the vessel using AI and selecting boxes containing the region of interest as taught by Kunio. One of the ordinary skill in the art would have been motivated to incorporate this because the area of interest is further narrowed down and the interested structure is pinpointed with more accuracy. Regarding Claim 54, Lang and Kunio teach the limitations of Claim 53, and further teaches wherein the one or more processors are further configured to: predict center points of each of the plurality of virtual boxes, pair the detected device marker candidates with the nearest of the plurality of boxes (corresponding disclosure in at least [0111], where there are marker candidates which are predicted “For the object detection model (also referred to as the regression model or keypoint detection model as aforementioned) architecture, one or more embodiments may use an angio image or images as an input and may predict the marker location in a form of a spatial coordinate. 
This approach/architecture has advantages over semantic segmentation because the object detection model predicts”); filter the device marker candidates that are beyond boundaries of the plurality of virtual boxes; update the center points of each of the plurality of virtual boxes using the filtered device marker candidates; determine displacement of the predicted center points and updated center points (corresponding disclosure in at least [0121], where there is a displacement (difference) between predicted and updated (ground truth) “Considering the movement of the marker from one frame to one after as a vector, the difference of pullback paths can be evaluated in terms of the differences of the magnitude (i.e., length) of the vectors (in ground truth and in prediction) and the angle differences of the vectors”); and repositioning, based on the determined displacement, the plurality of virtual boxes by the determined displacement (corresponding disclosure in at least [0122], where there is a repositioning based on the displacement (the system will assess the prediction vs the ground truth and move accordingly) “evaluation may be performed by assessing the movement of the detected/predicted marker location over a certain period of time. Since the marker should move in a certain direction, which can be defined by a user and/or with a given prior knowledge of anatomy of the vessel (from distal to proximal of the vessel), if the detected/predicted marker location does not move the appropriate direction, a model can be penalized. For example, if frame-by-frame prediction is performed, the movement of the detected/predicted marker location can be assessed by comparing the detected/predicted location in a certain number of frames prior to the frame that is currently used for training. If a model that uses a sequence of frames as input, the movement can be evaluated by comparing the detected/predicted marker locations at the first and the last frames of the sequence”). Kunio discloses the claimed invention except for virtual boxes. It would have been an obvious matter of design choice to use virtual boxes rather than marker points since such a modification would have involved a mere change in the form or shape of a component. A change in form or shape is generally recognized as being within the level of ordinary skill in the art. In re Dailey, 149 USPQ 47 (CCPA 1966). Regarding Claim 55, Lang and Kunio teach the limitations of Claim 54, and Kunio further teaches wherein the updating the center point includes approximating the center point of the virtual box using the device marker candidate (corresponding disclosure in at least [0105], where there is an approximation of the marker (estimation) “by improving estimation of the marker location, the success rate of the marker detection may be improved and likewise the success rate of coregistration may be improved”). Regarding Claim 56, Lang and Kunio teach the limitations of Claim 55, and Kunio further teaches wherein more than one device marker candidate is within at least one of the plurality of virtual boxes (corresponding disclosure in at least [0106], where the marker is within the box (there is a classification method determining if the marker is present within the region) “By way of at least one example, a segmentation may involve classifying a given area or region within an image into one of two classes (foreground and background). 
By way of a non-limiting, non-exhaustive embodiment example, the two classes may indicate whether a target (e.g., a pixel, an area of an image, a target object in an image, etc.) represents a radiopaque marker (first class, foreground, etc.) or does not represent a marker (second class, background, etc.). In one or more output examples, each pixel may be classified into either representing a marker or not representing a marker”). Regarding Claim 58, Lang and Kunio teach the limitations of Claim 53, and Lang further teaches wherein the extraluminal images are live x-ray angiographs or fluoroscopy images (corresponding disclosure in at least [0032], where the image is an x-ray angiograph “wherein the one or more computer processors are configured to display an intra-operative angiogram of the patient on the computer monitor” and further in [1031], where x-ray angiography is specified “This registration can determine the optimal rotation, translation, scaling/magnification/minification and projection parameters that rigidly map and project 3D coordinates from the preoperative scan to 2D image coordinates of the x-ray angiogram”). Claims 57 and 59-60 are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) and Kunio (US20220346885A1) as applied in Claim 53 and in further view of Ma (“Dynamic Analysis of X-ray Angiography for Image-Guided Coronary Interventions”, 2020, Erasmus University Rotterdam). Regarding Claim 57, Lang and Kunio teach the limitations of Claim 53, but does not teach wherein the AI model is trained using annotations on high dose x-ray angiographs. Ma, in a similar field of endeavor, teaches a similar concept of wherein the AI model is trained using annotations on high dose x-ray angiographs (corresponding disclosure in at least [pg. 87, 6.2.1. “Offline Phase”], where there are high dose x-ray angiographs with annotations “In this work we manually annotated the catheter tip in the offline XA sequence. In real clinical scenarios, the annotation work can be done easily and efficiently by a technician who typically sits in front of monitors outside the catheterization lab to assist the procedure” and further in [pg. 119, 7.1 “Summary”], where annotated images are used for the AI model “The proposed tracking method achieved a tracking accuracy with an average error of 1.3 mm on 34 clinical X-ray sequences and has been shown superior to detection without temporal information using the CNN and tracking with only the motion estimation using optical flow. The roadmapping with the proposed tracking algorithm achieved an average error about 2 mm on 409 frames with guidewire annotations from the 34 sequences”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated annotations on the x-ray angiographs for the AI model as taught by Ma. One of the ordinary skill in the art would have been motivated to incorporate this because the annotations provide structured data for the model to understand the visual image. Regarding Claim 59, Lang and Kunio teach the limitations of Claim 53, but does not teach wherein the one or more processors are further configured to track the region of interest during a percutaneous coronary intervention procedure. Ma, in a similar field of endeavor, teaches a similar concept of wherein the one or more processors are further configured to track the region of interest during a percutaneous coronary intervention procedure (corresponding disclosure in at least [pg. 
122, 7.2 “Future Perspectives”], where the procedure is done during a percutaneous coronary intervention “In conclusion, in this thesis we have developed and evaluated novel dynamic image analysis approaches towards an improved image guidance for percutaneous coronary interventions”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated tracking the region of interest during percutaneous coronary intervention procedure as taught by Ma. One of the ordinary skill in the art would have been motivated to incorporate this because tracking a region of interest during a procedure ensures that the appropriate area is being targeted. Regarding Claim 60, Lang and Kunio teach the limitations of Claim 53, but does not teach wherein the one or more processors are further configured to enhance the region of interest using local contrast stretching by selectively enhancing the local contrast between the detected device marker candidate and surrounding regions. Ma, in a similar field of endeavor, teaches a similar concept wherein the one or more processors are further configured to enhance the region of interest using local contrast stretching by selectively enhancing the local contrast between the detected device marker candidate and surrounding regions (corresponding disclosure in at least [pg. 13, 2.3 “Experiments”] “This mask can be used to assess the local contrast around vessels in XA. In mask 2, as shown in Fig. 2.3 column 3, everything outside the foreground is considered background, which thus also evaluates the removal of the diaphragm, guiding catheters, etc.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated enhancing the region of interest with local contrast stretching as taught by Ma. One of the ordinary skill in the art would have been motivated to incorporate this because increasing the amount of contrast in a region of interest provides greater visibility of the structures. Claim 61 is rejected under 35 U.S.C. 103 as being unpatentable over Lang (US20210137634A1) and Kunio (US20220346885A1) as applied in Claim 53, and in further view of Mino (US20230123739A1) and Rico (US20100158332A1). Regarding Claim 61, Lang and Kunio teach the limitations of Claim 53, and Lang further teaches the one or more processors (corresponding disclosure in at least [0005], where there are one or more processors “Some embodiments relate to a system comprising an optical head mounted display, and a computer system with one or more processors”), but does not teach automatically zoom the region of interest, sorting pixel values of the real-time visualization by their intensity, normalizing pixel intensity values lower than a predetermined threshold and applying a median filter to the normalized pixel intensity values of the real time visualization. Mino, in a similar field of endeavor, teaches a similar concept of wherein the one or more processors are further configured to automatically zoom the region of interest (corresponding disclosure in at least [0099] of Mino, where there is automatic zooming of the visualized image “the displayed region of the reconstructed or integrated 3D images can be automatically adjusted in accordance with the endoscope navigation plan. In addition, the display 543 may display the real-time 3D image of the patent's anatomy during the procedure. 
Further, the output unit 542 may automatically zoom in or zoom out a region in the image of the patient anatomy based on a position or direction of a distal end of the endoscope relative to an anatomical target”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the automatic zooming as taught by Mino. One of the ordinary skill in the art would have been motivated to incorporate this because the zoom provides more structural details of the region of interest. Lang, Kunio, and Mino do not teach sorting pixel values of the real-time visualization by their intensity, normalizing pixel intensity values lower than a predetermined threshold and applying a median filter to the normalized pixel intensity values of the real time visualization. Rico, in a similar field of endeavor teaches a similar concept of sorting pixel values of the real-time visualization by their intensity (corresponding disclosure in at least [0100], where there are pixel values being sorted by intensity “MeanLesion is the mean grey value of pixels inside the lesion and MeanBackground is the mean grey value of pixels of the background region(s)” and further in [0102], where there is a sorting step of the pixel values “After an analysis of features computed or extracted from the image as described above and with false positives removed, at a final step (step 324), all blobs having likelihood values above a threshold value are reported, for example, by showing them on a display device, for further study by a radiologist, or forwarded to additional CAD modules for further automated analysis. They may be reported after sorting”); normalizing pixel intensity values lower than a predetermined threshold (corresponding disclosure in at least [0037], where pixels are normalized “Next step is to normalize (step 130) image intensities. As is known to those skilled in the art, intensities of pixels produced by hardware devices of most medical imaging modalities generally suffer from inconsistency introduced by variations in image acquisition hardware”, and further in [0039], where the pixels are normalized to a lower value (the pixels are based on a particular threshold) “The normalized image is next processed to detect distinct areas of contiguous pixels, or “blobs”, that have consistent or similar internal intensity characteristics (step 140). Different methods of detecting blobs may be employed. In general, one first generates a parameter map, i.e., spatial variation of parameter values at each pixel of the image, for a selected parameter. Then, contiguous pixels having the parameter values satisfying certain criteria, such as exceeding a threshold value, below a threshold value or within a pre-determined range, and forming distinct areas are identified as belonging to blobs, with each distinct area being a detected blob”), and applying a median filter to the normalized pixel intensity values of the real time visualization (corresponding disclosure in at least [0118], where a median filter is applied “ the morphological filter can be replaced by a median filter to produce a similar result”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated sorting pixel intensity, normalizing the pixel intensity, and applying a median filter as taught by Rico. 
One of the ordinary skill in the art would have been motivated to incorporate this because these steps ensure enhancing the image quality and reduces noise in the image. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM whose telephone number is (571)272-1821. The examiner can normally be reached Monday-Friday 6-2 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.E.K./ Examiner, Art Unit 3797 /SERKAN AKAR/ Primary Examiner, Art Unit 3797

Prosecution Timeline

Aug 08, 2024
Application Filed
Dec 05, 2025
Interview Requested
Mar 07, 2026
Non-Final Rejection — §102, §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+65.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
