Prosecution Insights
Last updated: April 19, 2026
Application No. 18/619,745

IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE DISPLAY METHOD, AND IMAGE PROCESSING PROGRAM

Non-Final OA: §103, §112
Filed: Mar 28, 2024
Examiner: LE, SARAH
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Terumo Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (above average; 172 granted / 258 resolved; +4.7% vs TC avg)
Interview Lift: +33.4% (strong; allowance with vs. without interview among resolved cases)
Avg Prosecution: 3y 1m (typical timeline); 22 applications currently pending
Total Applications: 280 (career total, across all art units)
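The headline allowance figure above is simply the ratio of granted to resolved cases; a minimal arithmetic check (assuming the stated counts of 172 granted out of 258 resolved):

```python
# Career allowance rate implied by the stated counts
granted = 172
resolved = 258

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 66.7%, displayed on the dashboard as 67%
```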

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 258 resolved cases.
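The per-statute deltas above can be cross-checked against the rates. Assuming each delta is the examiner's rate minus the Tech Center average for that statute, every row implies the same baseline, consistent with a single TC-wide estimate of about 40%:

```python
# Examiner's per-statute rates and their stated deltas vs the TC average (percent)
rates  = {"§101": 11.8, "§103": 59.2, "§102": 9.4, "§112": 14.3}
deltas = {"§101": -28.2, "§103": 19.2, "§102": -30.6, "§112": -25.7}

# Implied Tech Center baseline per statute: rate - delta
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # each statute implies the same 40.0% baseline
```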

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claims 10, 12, and 13 are objected to because of the following informalities:

Claim 10 depends from claim 4, which depends from claim 1. Claim 10 recites the limitation “a three-dimensional image that is the image” in lines 4-5, while claim 1 recites “an image” in line 3 and “the image” in line 4. Although the meaning is understandable, using multiple labels for the same element easily gives rise to confusion. The examiner recommends rephrasing to remove the double labeling.

Claim 12 depends from claim 11, which depends from claim 1. Claim 12 recites the limitation “a three-dimensional image that is the image” in lines 4-5, while claim 1 recites “an image” in line 3 and “the image” in line 4. Although the meaning is understandable, using multiple labels for the same element easily gives rise to confusion. The examiner recommends rephrasing to remove the double labeling.

Claim 13 depends from claim 1 and recites the limitation “a three-dimensional image that is the image” in lines 2-3, while claim 1 recites “an image” in line 3 and “the image” in line 4. Although the meaning is understandable, using multiple labels for the same element easily gives rise to confusion. The examiner recommends rephrasing to remove the double labeling.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “controller unit configured to” in claims 1-14. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 
112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 10 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 10 recites the limitation "a position" in line 3. There is insufficient antecedent basis for this limitation in the claim. Claim 13 recites the limitation “a position” in line 6 and line 9. There is insufficient antecedent basis for this limitation in the claim. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

1. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Strommer et al. (cited in an IDS), U.S. Patent Application Publication No. 2006/0058647 (“Strommer”), in view of CHAO et al. (cited in an IDS), U.S. Patent Application Publication No. 2020/0129142 (“CHAO”), and further in view of Jiang, U.S. Patent Application Publication No. 2014/0039294 (“Jiang”).

Regarding independent claim 1, Strommer teaches an image processing device ([0057] “The disclosed technique overcomes the disadvantages of the prior art by graphically designating on an image of the lumen, the position where a medical device (e.g., a PCI device, a dilation balloon, a stent delivery system) has to be delivered, and indicating when the medical device has reached the selected position. The medical device is attached to the tip of a catheter.”) configured to cause a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue ([0057] “The disclosed technique overcomes the disadvantages of the prior art by graphically designating on an image of the lumen, the position where a medical device (e.g., a PCI device, a dilation balloon, a stent delivery system) has to be delivered, and indicating when the medical device has reached the selected position. The medical device is attached to the tip of a catheter. A medical positioning system (MPS) sensor constantly detects the position of the medical device relative to the selected position.
This position is represented on a real-time image (e.g., live fluoroscopy), a pseudo-real-time image (e.g., previously recorded cine-loop) or a previously recorded still image frame of the lumen, thereby obviating the need to radiate the inspected organ of the patient repeatedly, neither or to repeatedly inject contrast agent to the body of the patient. The medical staff can either guide the catheter manually according to feedback from an appropriate user interface, such as display, audio output, and the like, or activate a catheter guiding system which automatically guides the catheter toward the selected position.” [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), an image representing the biological tissue (see at least [0068] Two-dimensional image 104 can be a still image of the lumen system (i.e., one of the images among a plurality of images in a cine-loop, which the operator selects). In this case, the selected two-dimensional image can be an image whose contrast for example, is better (e.g., the difference in the brightness of the dark pixels and the bright pixels in the image, is large) than all the rest, and which portrays the lumen system in a manner which is satisfactory for the operator either to designate the selected location of the medical device, or to view a real-time representation of the stent, as the medical device is being navigated within the lumen system. 
[0069] With reference to FIG. 1B, GUI 102 includes a three-dimensional image 106 of a lumen (referenced 108) of the lumen system displayed in GUI 100, through which the catheter is being maneuvered. Three-dimensional image 106 is reconstructed from a plurality of two-dimensional images which are detected by a two-dimensional image acquisition device, during an image acquisition stage, by a technique known in the art.”) [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.” and display a first element on a screen same as the image (see at least [0074] An MPS sensor (not shown) is firmly attached to the tip of the catheter. Three-dimensional image 106 is registered with two-dimensional image 104, such that each point in two-dimensional image 104 corresponds to a respective point in three-dimensional image 106. In this manner, the coordinates of each point in three-dimensional image 106 can be projected onto two-dimensional image 104. Alternatively, each point in two-dimensional image 104 can be transferred to three-dimensional image 106 (e.g., by acquiring a series of two-dimensional images from different viewing angles). A real-time representation 110 (FIG. 1A) of the MPS sensor is superimposed on lumen 108, as described herein below in connection with FIG. 6C. A real-time representation 112 (FIG. 
1B) of the MPS sensor is superimposed on three-dimensional image 106.”; [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”), the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. 
The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), the image processing device comprising: a control unit ([0168] Moving mechanism 586 can include a pair of angular movement rollers 604A and 604B, and a pair of linear movement rollers 606A and 606B, and respective moving elements (not shown) such as electric motors, actuators, and the like. However, moving mechanism 586 can include other, alternative or additional elements, as long as it imparts to catheter 596 the necessary motions described herein below (e.g., piezoelectric motors which transfer linear movement through friction). Optionally, moving mechanism 586 can be disposable in order to keep it sterile. Controller 584 includes a processor (not shown) and a storage unit (not shown) for storing information respective of a path 608, which catheter 596 should move according to, within lumen 108 (FIG. 1A)”) configured to cause the display to display, upon receiving a user operation of requesting marking of the position of the sensor (see at least [0079] With reference to FIG. 2A, during a planning session, the operator graphically designates a plurality of marks 116, 118, and 120 on two-dimensional image 104, as a selected position within lumen 108, which a medical device (not shown) is to be delivered to. 
The operator performs the marking either on a frozen two-dimensional image of lumen 108, or on a frozen reconstructed three-dimensional model of lumen 108. The operator performs the marking in different manners, such as manually, according to an automated two-dimensional or three-dimensional quantitative cardiac assessment (QCA), and the like.”; [0084] For simplicity, the medical device in the example set forth in FIGS. 2A, 2B, 3A, and 3B, is a stent. In this case, each of marks 116, 118, and 120 is a substantially straight line, which is substantially perpendicular to lumen 108. For example, marks 116 and 120 designate the two ends of the stent, while mark 118 designates the middle of the stent. Marks 116, 118, and 120 define the location of the stent in lumen 108, as well as the orientation thereof. The marking is performed via a user interface (not shown), such as a joystick, push button, pointing device (e.g., a mouse, stylus and digital tablet, track-ball, touch pad), and the like.), a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 
2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”;[ 0088] During the medical operation, the system superimposes features 128 and 130 together with marks 116, 118 and 120, while the catheter is being maneuvered through lumen 108, either on a real-time two-dimensional image of lumen 108 (e.g., angiogram), on a two-dimensional cine-loop of lumen 108, or on a frozen two-dimensional image of lumen 108. Additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, while the catheter is being maneuvered through lumen 108, either on a real-time three-dimensional image of lumen 108, on a still three-dimensional image of lumen 108, or on a cine-loop of lumen 108. Further additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, on the real-time two-dimensional image of lumen 108, as well as one or more navigation images of lumen 108 (e.g., virtual IVUS image--either a still image or a cine-loop), acquired from viewing angles different than that of the real-time two-dimensional image.”). Strommer is understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, CHAO teaches an image processing device (Fig.1, item 100) configured to cause a display (Fig.1, item 108) to display, based on tomographic data ([0093] FIG. 11B illustrates screen display 1100 of a live view during a pullback procedure in accordance with at least one embodiment of the present disclosure. A virtual venogram 500, acting as a roadmap in the live view 1100, automatically shows where the transducer array 124 is located within the body. In some embodiments, a co-registered X-ray, CAT scan, or fluoroscopy image may be used as a roadmap instead of or in addition to the virtual venogram 500. The screen display 1100 also includes a live tomographic IVUS image 1010. 
In addition, the screen display 1100 includes image setting controls 1120 (e.g., gain, field of view, etc.).) acquired by a sensor (Fig. 1, item 124; [0093] FIG. 11B illustrates screen display 1100 of a live view during a pullback procedure in accordance with at least one embodiment of the present disclosure. A virtual venogram 500, acting as a roadmap in the live view 1100, automatically shows where the transducer array 124 is located within the body. In some embodiments, a co-registered X-ray, CAT scan, or fluoroscopy image may be used as a roadmap instead of or in addition to the virtual venogram 500. The screen display 1100 also includes a live tomographic IVUS image 1010. In addition, the screen display 1100 includes image setting controls 1120 (e.g., gain, field of view, etc.).) moving in a lumen of a biological tissue (Fig. 1, item 120), an image representing the biological tissue and display a first element on a screen same as the image ([0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed).
A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”) the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). 
A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”), the image processing device (Figure 1, 100) comprising: a control unit configured to cause the display ([0063] “The controller or processing system 106 may include a processing circuit having one or more processors in communication with memory and/or other suitable tangible computer readable storage media. The controller or processing system 106 may be configured to carry out one or more aspects of the present disclosure. In some embodiments, the processing system 106 and the monitor 108 are separate components. In other embodiments, the processing system 106 and the monitor 108 are integrated in a single component. For example, the system 100 can include a touch screen device, including a housing having a touch screen display and a processor. The system 100 can include any suitable input device, such as a touch sensitive pad or touch screen display, keyboard/mouse, joystick, button, etc., for a user to select options shown on the monitor 108. The processing system 106, the monitor 108, the input device, and/or combinations thereof can be referenced as a controller of the system 100. 
The controller can be in communication with the device 102, the PIM 104, the processing system 106, the monitor 108, the input device, and/or other components of the system 100.”) to display, upon receiving a user operation of requesting marking of the position of the sensor, a second element, the second element being fixed at a position at time of the user operation (see at least [0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. 
The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.” [0107] FIG. 22 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. The screen display 2200 includes a live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, user instruction 2210, and labeling button 2220. In this example, the user instruction 2210 is instructing the user to click the labeling button 2220 when the pullback of the ultrasound transducer array 124 reaches the start of the common iliac vein. In some embodiments, this selection is optional, as the IVUS pullback virtual venogram system identifies the start and end of different vasculature segments automatically. In other embodiments, the IVUS pullback virtual venogram system permits the clinician or other user to select the marking of the start or end of a vasculature segment through voice, gesture, or other touch-free command, such that a non-sterile staff member is not needed to operate a keyboard, mouse, joystick, or other non-sterile input device.” [0109] FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. 
In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of displaying the trajectory of the catheter within the lumen as seen in Strommer with displaying marks which have been made by the user during operation as seen in CHAO, because this modification would indicate the segment of the patient's vasculature presently occupied by the ultrasound imaging array ([0109] of CHAO). Strommer and CHAO are silent regarding the remaining limitations of claim 1. In the same field of endeavor, Jiang teaches displaying a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the operation ([0044] as shown in Fig 2. “The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG.
2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40. In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible. [0045] Signals are also generated in each case by the positioning unit 22 for the individual positions 44, from which the positioning module 30 creates position data 34 for the positions 44. In addition there can be provision for a spatial orientation 52 of the positioning unit 52 to be created by the positioning module 30 from the signals of the positioning unit 22 and thus the plane of the cross-section 46. This can likewise be transferred as orientation data 54 from the localization module 30 to the graphic module 32. The spatial orientation 52 is represented in each case by a normal vector of the cross-sectional plane in FIG. 
2,” where 52 and 44 are the same fixed position) [Grayscale figure (media_image1.png) omitted] Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of displaying the trajectory of the catheter within the lumen as seen in Strommer and displaying marks which have been made by the user during operation as seen in CHAO with displaying cross sections and positions of the catheter with a spatial orientation moving along the track as seen in Jiang, because this modification would obtain images of the artery wall from inside the vessel ([0003] of Jiang). Thus, the combination of Strommer, CHAO and Jiang teaches an image processing device configured to cause a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue, an image representing the biological tissue and display a first element on a screen same as the image, the first element representing a position of the sensor and being displaced as the sensor moves, the image processing device comprising: a control unit configured to cause the display to display, upon receiving a user operation of requesting marking of the position of the sensor, a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation. Regarding claim 2, Strommer, CHAO and Jiang teach the image processing device according to claim 1, wherein the control unit is configured to set a color of the second element to a color different from a color of the first element (see at least [0091] of Strommer “It is further noted that the operator can direct the system to either turn on or turn off the display of superposition of any of the marks, the representation of the position of the stent, the trajectory, or a combination thereof, via the user interface.
Any attribute can be selected to represent the marks and the representation of the stent, as long as they are different, such as color, shape, size, and the like. However, a mark or a stent representation is displayed by the same attribute both in two-dimensional image 104 and three-dimensional image 106. For example, marks 116, 118, 120, 122, 124, and 126 are represented in green, features 128, 130, 132, and 134 are represented in blue, and trajectory 140 is represented in red.” [0074] of CHAO “ FIGS. 5-9 illustrate screen displays providing the guidance to the clinician during a IVUS pullback in peripheral vasculature. The screen displays advantageously provide a user with additional clarity to more clearly visualize aspects of deep venous disease. The screen displays perform several functions, including highlighting the segments of the vasculature, labeling the segments, and color coding or otherwise highlighting/distinguishing the segments and/or neighboring anatomy. The screen displays also automatically provide reference and compression measures (e.g., cross-sectional lumen area, diameter, etc.) within each of the segments. Segments meeting certain criteria (e.g., greater than or equal to 50% difference between reference and compression measures) are colored, highlighted, bolded, or marked differently (e.g., colored red) to indicate a segment of clinical interest or concern. Additionally, the screen displays provide real time feedback for the user about pullback speed. The GUIs can also provide for image quality improvement by provided the ability to adjust contrast, gain, focus, and/or other image settings. Image quality can also be improved based on providing feedback to the user to reach the correct pullback speed to obtain sufficient amount of high quality IVUS data. The screen displays provide: map to anatomy directly, immediate live values (reference, compression measurements), color coded segment highlights, pullback speed gauge (guidance). 
where color codes are used for various elements; [0083] In this example, a reference value 746 and compression value 748 associated with the CIV segment 540 are automatically provided on the screen display as the transducer array 124 moves within the vasculature. For example, the compression value 748 may be a numerical value of the cross-sectional lumen area for the particular patient, or a % compression value. In that regard, the compression value is automatically calculated based on the obtained IVUS data and then output to the screen display adjacent to the virtual venogram 500. In this example, the CIV segment 540 is colored based on the comparison between the reference value and the compression value. For example, comparison can be a ratio of the compression value 748 and the reference value 746 (e.g., compression value divided by reference value). In this example, the CIV segment 540 is colored differently than the IVC segment 540. For example, when the compression value 748 is less than 50% of the reference value 746, the segment can be colored in a second color (e.g., green) to indicate that the amount of compression is potentially harmful to the patient. Different colorings, shadings, highlighting can be used for the comparison of the reference value 746 and compression value 748 (e.g., different colors for greater than 50%, less than 50%, between 0% and 25%, between 25 and 50%, between 50% and 75%, between 75% and 100% ); [0011] of Jiang). In addition, the same motivation is used as the rejection for claim 1. 
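For context, the compression-based coloring that CHAO [0083] describes (comparing a compression value against a reference value and coloring the segment by the resulting ratio) can be sketched in a few lines. The specific thresholds, color names, and function below are illustrative assumptions, not values recited in the reference or the claims:

```python
def segment_color(reference_area: float, compressed_area: float) -> str:
    """Illustrative sketch of compression-based segment coloring
    (cf. CHAO [0083]): the compression value is divided by the
    reference value, and the segment is colored by that ratio.
    Thresholds and color names here are hypothetical examples."""
    ratio = compressed_area / reference_area
    if ratio < 0.25:
        return "red"     # heavy compression: segment of clinical concern
    elif ratio < 0.50:
        return "orange"  # lumen compressed below 50% of reference
    return "green"       # compression within tolerance
```

As CHAO notes, different colorings or shadings can be attached to any set of ratio bands (e.g., 0–25%, 25–50%, 50–75%, 75–100%); the sketch simply makes the comparison step concrete.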
Regarding claim 3, Strommer, CHAO and Jiang teach the image processing device according to claim 1, wherein the control unit is configured to move the sensor to a position corresponding to a position of the second element upon receiving an operation of requesting movement of the sensor to the position corresponding to the position of the second element (see at least [0083] of Strommer During the planning session, a respective one of the displays displays marks 116, 118 and 120 articulated by the user interface on an image of lumen 108. The operator can move marks 116, 118 and 120 together along the full length of the trajectory (e.g., trajectory 114 of FIG. 1B). Mark 118 designates the middle of the medical device, while marks 116 and 120 designate the rear end and the front end of the medical device, respectively. The system determines the distance between marks 116 and 120, according to the type (e.g., the size of stent) which the operator has selected. Marks 116, 118 and 120 together, are locked-on to the trajectory, while being operative to travel along the trajectory. The operator designates the position of mark 118 along the trajectory where the medical device is to be delivered to. [0211] of Strommer “In procedure 846, a representation respective of the selected position is superimposed on the real-time navigation image, thereby enabling an operator to visually navigate the medical device toward the selected position. With reference to FIGS. 13 and 15A, processor 666 produces real-time superimposed two-dimensional image 760, by superimposing a representation of each of marks 808, 810, and 812 on a real-time two-dimensional image of lumen 722, of catheter 732, and of medical device 762. Thus, the operator can visually navigate medical device 762 toward the selected position, according to real-time superimposed two-dimensional image 760. 
[0212] According to another aspect of the disclosed technique, different trajectories of an MPS catheter within the lumen is determined, corresponding to different activity states of an organ of the patient, by moving the MPS catheter within the lumen. Each trajectory is defined in a three-dimensional MPS coordinate system, and is time-tagged with the corresponding activity state. Each trajectory is superimposed on a real-time two-dimensional image of the lumen, according to the activity state associated with the real-time two-dimensional image. This superimposed real-time two-dimensional which is associated with the organ timing signal detected by an organ timing signal monitor, is displayed on the display, thereby enabling the operator to mark the selected position on the superimposed real-time two-dimensional image. The operator, navigates the medical device to the selected position, either automatically or manually by employing the method of FIG. 5, as described herein above. Alternatively, the operator navigates the medical device to the selected position, visually, by employing the method of FIG. 17, as described herein above.”; [0044-0045] as shown in Fig.2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 4, Strommer, CHAO and Jiang teach the image processing device according to claim 1, wherein the control unit is configured to cause the display to display, upon receiving the user operation again, a third element together with the first element and the second element, the third element being fixed at a position same as a position of the first element at time of the user operation performed again (see at least [0079] With reference to FIG. 2A, during a planning session, the operator graphically designates a plurality of marks 116, 118, and 120 on two-dimensional image 104, as a selected position within lumen 108, which a medical device (not shown) is to be delivered to. 
The operator performs the marking either on a frozen two-dimensional image of lumen 108, or on a frozen reconstructed three-dimensional model of lumen 108. The operator performs the marking in different manners, such as manually, according to an automated two-dimensional or three-dimensional quantitative cardiac assessment (QCA), and the like.”; [0084] For simplicity, the medical device in the example set forth in FIGS. 2A, 2B, 3A, and 3B, is a stent. In this case, each of marks 116, 118, and 120 is a substantially straight line, which is substantially perpendicular to lumen 108. For example, marks 116 and 120 designate the two ends of the stent, while mark 118 designates the middle of the stent. Marks 116, 118, and 120 define the location of the stent in lumen 108, as well as the orientation thereof. The marking is performed via a user interface (not shown), such as a joystick, push button, pointing device (e.g., a mouse, stylus and digital tablet, track-ball, touch pad), and the like.); [0088] During the medical operation, the system superimposes features 128 and 130 together with marks 116, 118 and 120, while the catheter is being maneuvered through lumen 108, either on a real-time two-dimensional image of lumen 108 (e.g., angiogram), on a two-dimensional cine-loop of lumen 108, or on a frozen two-dimensional image of lumen 108. Additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, while the catheter is being maneuvered through lumen 108, either on a real-time three-dimensional image of lumen 108, on a still three-dimensional image of lumen 108, or on a cine-loop of lumen 108. 
Further additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, on the real-time two-dimensional image of lumen 108, as well as one or more navigation images of lumen 108 (e.g., virtual IVUS image--either a still image or a cine-loop), acquired from viewing angles different than that of the real-time two-dimensional image.; see at least[0107] - [0109] of CHAO “FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.; [0044] of Jiang as shown in Fig 2. “The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. 
At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG. 2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40. In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible. [0045] Signals are also generated in each case by the positioning unit 22 for the individual positions 44, from which the positioning module 30 creates position data 34 for the positions 44. In addition there can be provision for a spatial orientation 52 of the positioning unit 52 to be created by the positioning module 30 from the signals of the positioning unit 22 and thus the plane of the cross-section 46. This can likewise be transferred as orientation data 54 from the localization module 30 to the graphic module 32. The spatial orientation 52 is represented in each case by a normal vector of the cross-sectional plane in FIG. 2,” where 52 and 44 are the same fixed position) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 5, Strommer, CHAO and Jiang teach the image processing device according to claim 4, wherein the control unit is configured to set a color of the third element to a color different from the color of the second element (see at least [0091]of Strommer “ It is further noted that the operator can direct the system to either turn on or turn off the display of superposition of any of the marks, the representation of the position of the stent, the trajectory, or a combination thereof, via the user interface. 
Any attribute can be selected to represent the marks and the representation of the stent, as long as they are different, such as color, shape, size, and the like. However, a mark or a stent representation is displayed by the same attribute both in two-dimensional image 104 and three-dimensional image 106. For example, marks 116, 118, 120, 122, 124, and 126 are represented in green, features 128, 130, 132, and 134 are represented in blue, and trajectory 140 is represented in red.” [0074] of CHAO “ FIGS. 5-9 illustrate screen displays providing the guidance to the clinician during a IVUS pullback in peripheral vasculature. The screen displays advantageously provide a user with additional clarity to more clearly visualize aspects of deep venous disease. The screen displays perform several functions, including highlighting the segments of the vasculature, labeling the segments, and color coding or otherwise highlighting/distinguishing the segments and/or neighboring anatomy. The screen displays also automatically provide reference and compression measures (e.g., cross-sectional lumen area, diameter, etc.) within each of the segments. Segments meeting certain criteria (e.g., greater than or equal to 50% difference between reference and compression measures) are colored, highlighted, bolded, or marked differently (e.g., colored red) to indicate a segment of clinical interest or concern. Additionally, the screen displays provide real time feedback for the user about pullback speed. The GUIs can also provide for image quality improvement by provided the ability to adjust contrast, gain, focus, and/or other image settings. Image quality can also be improved based on providing feedback to the user to reach the correct pullback speed to obtain sufficient amount of high quality IVUS data. The screen displays provide: map to anatomy directly, immediate live values (reference, compression measurements), color coded segment highlights, pullback speed gauge (guidance). 
where color codes are used for various elements; [0083] In this example, a reference value 746 and compression value 748 associated with the CIV segment 540 are automatically provided on the screen display as the transducer array 124 moves within the vasculature. For example, the compression value 748 may be a numerical value of the cross-sectional lumen area for the particular patient, or a % compression value. In that regard, the compression value is automatically calculated based on the obtained IVUS data and then output to the screen display adjacent to the virtual venogram 500. In this example, the CIV segment 540 is colored based on the comparison between the reference value and the compression value. For example, comparison can be a ratio of the compression value 748 and the reference value 746 (e.g., compression value divided by reference value). In this example, the CIV segment 540 is colored differently than the IVC segment 540. For example, when the compression value 748 is less than 50% of the reference value 746, the segment can be colored in a second color (e.g., green) to indicate that the amount of compression is potentially harmful to the patient. Different colorings, shadings, highlighting can be used for the comparison of the reference value 746 and compression value 748 (e.g., different colors for greater than 50%, less than 50%, between 0% and 25%, between 25 and 50%, between 50% and 75%, between 75% and 100% [0011] of Jiang). In addition, the same motivation is used as the rejection for claim 1. 
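The requirement of claims 2 and 5, that each newly fixed element take a color different from the previously placed element, parallels Strommer's use of distinct attributes (green marks, blue features, red trajectory in [0091]). A minimal sketch of that behavior, with an assumed palette and mark structure chosen purely for illustration, might look like:

```python
# Hypothetical palette: cycling through it guarantees consecutive
# marks never share a color (cf. claims 2 and 5; Strommer [0091]).
MARK_PALETTE = ["green", "blue", "red", "magenta"]

def fix_mark(marks: list, sensor_position: float) -> dict:
    """Fix a mark at the sensor's current position, assigning a color
    different from the mark placed immediately before it."""
    color = MARK_PALETTE[len(marks) % len(MARK_PALETTE)]
    mark = {"position": sensor_position, "color": color}
    marks.append(mark)
    return mark
```

Under this sketch, a first user operation fixes a green second element, a repeated operation fixes a blue third element at the sensor's new position, and so on, with each element remaining at the position it was given when fixed.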
Regarding claim 6, Strommer, CHAO and Jiang teach the image processing device according to claim 4, wherein the control unit is configured to cause the display to display a fourth element together with the first element, the second element, and the third element, the fourth element being fixed at a position between the second element and the third element (see at least [0083] During the planning session, a respective one of the displays displays marks 116, 118 and 120 articulated by the user interface on an image of lumen 108. The operator can move marks 116, 118 and 120 together along the full length of the trajectory (e.g., trajectory 114 of FIG. 1B). Mark 118 designates the middle of the medical device, while marks 116 and 120 designate the rear end and the front end of the medical device, respectively. The system determines the distance between marks 116 and 120, according to the type (e.g., the size of stent) which the operator has selected. Marks 116, 118 and 120 together, are locked-on to the trajectory, while being operative to travel along the trajectory. The operator designates the position of mark 118 along the trajectory where the medical device is to be delivered to”; see at least[0107] - [0109] of CHAO “FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. 
In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.”; [0044-0045] as shown in Fig.2 of Jiang ) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 7, Strommer, CHAO and Jiang teach the image processing device according to claim 6, wherein the control unit is configured to calculate an intermediate position between the second element and the third element as the position between the second element and the third element (see at least [0083] During the planning session, a respective one of the displays displays marks 116, 118 and 120 articulated by the user interface on an image of lumen 108. The operator can move marks 116, 118 and 120 together along the full length of the trajectory (e.g., trajectory 114 of FIG. 1B). Mark 118 designates the middle of the medical device, while marks 116 and 120 designate the rear end and the front end of the medical device, respectively. The system determines the distance between marks 116 and 120, according to the type (e.g., the size of stent) which the operator has selected. Marks 116, 118 and 120 together, are locked-on to the trajectory, while being operative to travel along the trajectory. The operator designates the position of mark 118 along the trajectory where the medical device is to be delivered to. [0084] For simplicity, the medical device in the example set forth in FIGS. 2A, 2B, 3A, and 3B, is a stent. In this case, each of marks 116, 118, and 120 is a substantially straight line, which is substantially perpendicular to lumen 108. 
For example, marks 116 and 120 designate the two ends of the stent, while mark 118 designates the middle of the stent. Marks 116, 118, and 120 define the location of the stent in lumen 108, as well as the orientation thereof. The marking is performed via a user interface (not shown), such as a joystick, push button, pointing device (e.g., a mouse, stylus and digital tablet, track-ball, touch pad), and the like.[0199] An operator (not shown) inputs position data respective of the selected position, by designating marks 726, 728, and 730, on image 720, to processor 666, via user interface 664. Marks 726, 728, and 730 designate the selected position within lumen 722 toward which a medical device (not shown), is to be maneuvered. The medical device is located at the tip of a catheter 732 (FIG. 13). For example, mark 726 designates the position at which a front end of a stent (not shown), should be placed, mark 730 designates the position at which the rear end of the stent should be placed, and mark 728 designates the position at which the middle of, the stent should be placed. The operator inputs position data respective of the same selected position, by designating marks 802 (FIG. 14B), 804, and 806, on image 724, to processor 666, via user interface 664.; see at least[0107] - [0109] of CHAO “FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. 
In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.”;[0044-0045] as shown in Fig.2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 8, Strommer, CHAO and Jiang teach the image processing device according to claim 6, wherein the control unit is configured to set a color of the fourth element to a color different from the color of the second element and the color of the third element (see at least [0091]of Strommer “ It is further noted that the operator can direct the system to either turn on or turn off the display of superposition of any of the marks, the representation of the position of the stent, the trajectory, or a combination thereof, via the user interface. Any attribute can be selected to represent the marks and the representation of the stent, as long as they are different, such as color, shape, size, and the like. However, a mark or a stent representation is displayed by the same attribute both in two-dimensional image 104 and three-dimensional image 106. For example, marks 116, 118, 120, 122, 124, and 126 are represented in green, features 128, 130, 132, and 134 are represented in blue, and trajectory 140 is represented in red.” [0074] of CHAO “ FIGS. 5-9 illustrate screen displays providing the guidance to the clinician during a IVUS pullback in peripheral vasculature. The screen displays advantageously provide a user with additional clarity to more clearly visualize aspects of deep venous disease. 
The screen displays perform several functions, including highlighting the segments of the vasculature, labeling the segments, and color coding or otherwise highlighting/distinguishing the segments and/or neighboring anatomy. The screen displays also automatically provide reference and compression measures (e.g., cross-sectional lumen area, diameter, etc.) within each of the segments. Segments meeting certain criteria (e.g., greater than or equal to 50% difference between reference and compression measures) are colored, highlighted, bolded, or marked differently (e.g., colored red) to indicate a segment of clinical interest or concern. Additionally, the screen displays provide real time feedback for the user about pullback speed. The GUIs can also provide for image quality improvement by provided the ability to adjust contrast, gain, focus, and/or other image settings. Image quality can also be improved based on providing feedback to the user to reach the correct pullback speed to obtain sufficient amount of high quality IVUS data. The screen displays provide: map to anatomy directly, immediate live values (reference, compression measurements), color coded segment highlights, pullback speed gauge (guidance). where color codes are used for various elements; [0083] In this example, a reference value 746 and compression value 748 associated with the CIV segment 540 are automatically provided on the screen display as the transducer array 124 moves within the vasculature. For example, the compression value 748 may be a numerical value of the cross-sectional lumen area for the particular patient, or a % compression value. In that regard, the compression value is automatically calculated based on the obtained IVUS data and then output to the screen display adjacent to the virtual venogram 500. In this example, the CIV segment 540 is colored based on the comparison between the reference value and the compression value. 
For example, comparison can be a ratio of the compression value 748 and the reference value 746 (e.g., compression value divided by reference value). In this example, the CIV segment 540 is colored differently than the IVC segment 540. For example, when the compression value 748 is less than 50% of the reference value 746, the segment can be colored in a second color (e.g., green) to indicate that the amount of compression is potentially harmful to the patient. Different colorings, shadings, highlighting can be used for the comparison of the reference value 746 and compression value 748 (e.g., different colors for greater than 50%, less than 50%, between 0% and 25%, between 25 and 50%, between 50% and 75%, between 75% and 100% ); [0044-0045] as shown in Fig.2 of Jiang). In addition, the same motivation is used as the rejection for claim 1. Regarding claim 9, Strommer, CHAO and Jiang teach the image processing device according to claim 6, wherein the control unit is configured to move the sensor to a position corresponding to a position of the fourth element upon receiving an operation of requesting movement of the sensor to the position corresponding to the position of the fourth element (see at least [0083] of Strommer During the planning session, a respective one of the displays displays marks 116, 118 and 120 articulated by the user interface on an image of lumen 108. The operator can move marks 116, 118 and 120 together along the full length of the trajectory (e.g., trajectory 114 of FIG. 1B). Mark 118 designates the middle of the medical device, while marks 116 and 120 designate the rear end and the front end of the medical device, respectively. The system determines the distance between marks 116 and 120, according to the type (e.g., the size of stent) which the operator has selected. Marks 116, 118 and 120 together, are locked-on to the trajectory, while being operative to travel along the trajectory. 
The operator designates the position of mark 118 along the trajectory where the medical device is to be delivered to. [0211] “In procedure 846, a representation respective of the selected position is superimposed on the real-time navigation image, thereby enabling an operator to visually navigate the medical device toward the selected position. With reference to FIGS. 13 and 15A, processor 666 produces real-time superimposed two-dimensional image 760, by superimposing a representation of each of marks 808, 810, and 812 on a real-time two-dimensional image of lumen 722, of catheter 732, and of medical device 762. Thus, the operator can visually navigate medical device 762 toward the selected position, according to real-time superimposed two-dimensional image 760. [0212] According to another aspect of the disclosed technique, different trajectories of an MPS catheter within the lumen are determined, corresponding to different activity states of an organ of the patient, by moving the MPS catheter within the lumen. Each trajectory is defined in a three-dimensional MPS coordinate system, and is time-tagged with the corresponding activity state. Each trajectory is superimposed on a real-time two-dimensional image of the lumen, according to the activity state associated with the real-time two-dimensional image. This superimposed real-time two-dimensional image, which is associated with the organ timing signal detected by an organ timing signal monitor, is displayed on the display, thereby enabling the operator to mark the selected position on the superimposed real-time two-dimensional image. The operator navigates the medical device to the selected position, either automatically or manually by employing the method of FIG. 5, as described herein above. Alternatively, the operator navigates the medical device to the selected position, visually, by employing the method of FIG.
17, as described herein above.” [0044-0045] as shown in Fig.2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 10, Strommer, CHAO and Jiang teach the image processing device according to claim 4, wherein the control unit is configured to set a color of a region between a cross section corresponding to the position of the second element and a cross section corresponding to a position of the third element to a color different from a color of an adjacent region in a three-dimensional image that is the image (see at least [0052] of Strommer “FIG. 16B is a schematic illustration of the lumen of FIG. 15B, when the medical device has reached the selected position; [0104] In procedure 164, at least one image sequence is selected from a plurality of image sequences, each of the image sequences being acquired from a different perspective. The processor selects an image sequence among a plurality of image sequences, each acquired by a different image acquisition device, from a different viewing angle, or a combination thereof. [0132] Reference is further made to FIGS. 8A, 8B and 8C. FIG. 8A is an illustration of the lumen of FIG. 1A, having a plurality of occluded regions. FIG. 8B is a cross-sectional view of a selected region of the lumen of FIG. 8A. FIG. 8C is a schematic illustration of a representation of the lumen of FIG. 8B in a GUI, generally referenced 450, operative in accordance with another embodiment of the disclosed technique. [0136] A system (not shown) then marks only those regions on three-dimensional image 106, which are occluded more than the selected occlusion percentage. In the example set forth in FIG. 8B, only those regions of lumen 108 which are occluded 70% or more, are marked in three-dimensional image 106. Plaques 452 and 456, which exceed 70%, are represented by marked regions 470 and 472, respectively, on three-dimensional image 106.
Marked regions 470 and 472 are differentiated from the rest of the portions of three-dimensional image 106, by being colored in a different hue, marked by hatches, animated, and the like.” [0171] as shown in Fig.11 “Path 608 is a three-dimensional curve between an origin 612 and a destination 614 of a distal portion (not shown) of catheter 596 relative to lumen 108. Both origin 612 and destination 614 are within a field of view of imaging system 592. Path 608 is determined during an imaging session prior to the medical operation, and stored in the storage unit. [0172] Controller 584 calculates and constructs path 608, for example, according to a plurality of two-dimensional images obtained from lumen 108, with the aid of a C-arm imager. For example, the C-arm can obtain two two-dimensional ECG gated images of lumen 108 at two different non-parallel ECG gated image planes. When the operator indicates origin 612 and destination 614, the C-arm constructs path 608 in three dimensions. It is noted that controller 584 calculates path 608 based on one or more image processing algorithms, according to contrast variations of lumen 108 relative to the background”; [0109] of CHAO “FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124.
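The occlusion-threshold marking described in Strommer [0136], quoted above (only regions occluded at or above a selected percentage are marked on the three-dimensional image), can be sketched as follows; the data structure, function name, and occlusion percentages are illustrative assumptions:

```python
# Illustrative sketch (assumed) of Strommer's [0136] occlusion-threshold
# marking: only regions whose occlusion percentage meets or exceeds the
# selected threshold (70% in the quoted example) are marked on the 3D image.

def regions_to_mark(occlusions: dict, threshold: float = 70.0) -> list:
    """Return identifiers of regions whose occlusion % meets the threshold."""
    return [region for region, pct in occlusions.items() if pct >= threshold]

# Invented percentages for illustration: plaques 452 and 456 exceed 70% and
# would be marked (cf. marked regions 470 and 472 in the quoted passage).
plaques = {"452": 75.0, "454": 40.0, "456": 82.0}
marked = regions_to_mark(plaques)
```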
In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all”; see at least [0011] of Jiang “The volume graphic can for example be provided in the form of 3D image data. 3D image data is to be understood here as a dataset comprising data for individual volume elements (voxel-volume elements) of the imaged volume. For each volume element in this case an intensity value for a gray tone can be specified or a number of intensity values for a color tone of the volume graphic can be specified.”; [0044] The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG. 2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40.
In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 11, Strommer, CHAO and Jiang teach the image processing device according to claim 1, wherein the control unit is configured to combine a graphic element group that is an element group including the first element and the second element (see at least [0087-0089] as shown in Figs 2A-2B, 3A-3B of Strommer “During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0088] During the medical operation, the system superimposes features 128 and 130 together with marks 116, 118 and 120, while the catheter is being maneuvered through lumen 108, either on a real-time two-dimensional image of lumen 108 (e.g., angiogram), on a two-dimensional cine-loop of lumen 108, or on a frozen two-dimensional image of lumen 108. 
Additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, while the catheter is being maneuvered through lumen 108, either on a real-time three-dimensional image of lumen 108, on a still three-dimensional image of lumen 108, or on a cine-loop of lumen 108. Further additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, on the real-time two-dimensional image of lumen 108, as well as one or more navigation images of lumen 108 (e.g., virtual IVUS image--either a still image or a cine-loop), acquired from viewing angles different than that of the real-time two-dimensional image. [0089] The system determines the distance between the centers (not shown) of features 128 and 130, according to the type (i.e., size) of stent which the operator selects for mounting in lumen 108. This distance as displayed on the respective one of the displays, is substantially fixed, as the stent is maneuvered through lumen 108. Features 128 and 130 move together on image 104, while the stent is maneuvered through lumen 108. A respective one of the displays can display trajectories 140 and 142, either while a catheter (not shown) is being maneuvered through lumen 108, or during a play-back session, after performing the medical operation on the patient.”; [0044-0045] as shown in Fig.2 of Jiang) and an elongated graphic element representing a movement range of the sensor (see at least [0087-0089] as shown in Figs 2A-2B, 3A-3B of Strommer “During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 
2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0089] The system determines the distance between the centers (not shown) of features 128 and 130, according to the type (i.e., size) of stent which the operator selects for mounting in lumen 108. This distance as displayed on the respective one of the displays, is substantially fixed, as the stent is maneuvered through lumen 108. Features 128 and 130 move together on image 104, while the stent is maneuvered through lumen 108. A respective one of the displays can display trajectories 140 and 142, either while a catheter (not shown) is being maneuvered through lumen 108, or during a play-back session, after performing the medical operation on the patient.”; [0101-0104] of CHAO “FIG. 17 illustrates a screen display 1700 during pullback, e.g., during recording of IVUS data, in accordance with at least one embodiment of the present disclosure. On the left side of the screen display, a roadmap image, co-registered external image, or virtual venogram 500 of the vasculature is shown.
A portion 1710 of the vasculature 1720 from which IVUS data has already been collected is highlighted, colored, and/or shaded…” [0107-0109]; reference numeral “1020” in Fig.22 and Fig.23 showing a long graphic element indicating a movement range of a sensor; [0044-0045] as shown in Fig.2 of Jiang) and to cause the display to display the graphic element group and the elongated graphic element ([0087-0089] as shown in Figs. 2A-2B, 3A-3B, 6B of Strommer; [0101-0104]; [0107-0109]; reference numeral “1020” in Fig.22 and Fig.23 showing a long graphic element indicating a movement range of a sensor. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of displaying a trajectory line of the catheter within the lumen as seen in Strommer by including the elongated graphic element 1020 as seen in CHAO because this modification would achieve the expected benefit of drawing more attention from the user.) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 12, Strommer, CHAO and Jiang teach the image processing device according to claim 11, wherein the control unit is configured to cause the display to display the elongated graphic element in a direction in which a longitudinal axis direction of the elongated graphic element is parallel to a longitudinal direction of the lumen in the three-dimensional image that is the image (see at least [0087-0089], [0119-0120] as shown in Figs. 2B, 3B, 6B of Strommer; [0101-0104]; [0107-0109]; reference numeral “1020” in Fig.22 and Fig.23 showing a long graphic element indicating a movement range of a sensor; [0044-0045] as shown in Fig.2 of Jiang.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of displaying a trajectory line of the catheter within the lumen as seen in Strommer by including the elongated graphic element 1020 as seen in CHAO because this modification would achieve the expected benefit of drawing more attention from the user; [0044-0045] as shown in Fig.2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 13, Strommer and CHAO teach the image processing device according to claim 1, wherein the control unit is configured to define the first element and the second element in a three-dimensional image that is the image (see at least [0085] as shown at least in Figs. 2B, 3B, 8B, 8C of Strommer “A plurality of marks 122, 124 and 126, which are the counterpart of marks 116, 118, and 120, respectively, are simultaneously displayed on three-dimensional image 106 in GUI 102. For the purpose of performing the marking, each of two-dimensional image 104 and three-dimensional image 106 is frozen at the same activity-state of the inspected organ (e.g., the heart). This freezing feature provides a still image of lumen 108, thereby preventing vibrations of the image and enabling a successful marking by the operator. [0086] Instead of manually designating the marks, an algorithm can be employed to automatically identify the selected location (e.g., by entering into the algorithm, a selected percentage of occlusion by a plaque in a lumen), and designate marks 116, 118, 120, 122, 124, and 126, automatically. This aspect of the invention is described herein below in connection with FIGS. 8A, 8B, and 8C.
The system associates the occlusion data with three-dimensional image 106, and projects this occlusion data on two-dimensional image 104, for the purpose of designating marks 116,118 and 120.”; Fig.12, Fig.13 of CHAO; [0044-0045] as shown in Fig.2 of Jiang), the first element being defined as at least a voxel representing an inner surface of the biological tissue or a voxel adjacent to the voxel representing the inner surface and representing the lumen in a first voxel group corresponding to a position of the sensor, the second element being defined as at least a voxel representing the inner surface or a voxel adjacent to the voxel representing the inner surface and representing the lumen in a second voxel group corresponding to a position of the sensor at time of the user operation (see at least [0137] of Strommer “It is noted the system enables the operator to manually correct the marking on screen, in case that the operator, according to her medical knowledge and experience detects for example, that the plaque portion should be different than what the system indicated. It is further noted that the system can present the various layers of the lumen (i.e., media, adventitia and intima), in GUI 450, in different colors.” where coloring the various layers of the lumen in the 3D image; Fig.12 of CHAO “[0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. 
In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.” [0011] of Jiang “The volume graphic can for example be provided in the form of 3D image data. 3D image data is to be understood here as a dataset comprising data for individual volume elements (voxel-volume elements) of the imaged volume. For each volume element in this case an intensity value for a gray tone can be specified or a number of intensity values for a color tone of the volume graphic can be specified.”; [0044] The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. 
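The block-style pullback speed indicator described in CHAO [0094] above (more filled blocks indicate faster speed, fewer blocks slower speed) can be sketched as follows; the block count, maximum speed, and rendering characters are illustrative assumptions, not details from the reference:

```python
# Illustrative sketch (assumed) of CHAO's block-style pullback speed
# indicator 1230: the speed is clamped to a display range and mapped to a
# number of filled blocks out of a fixed total.

def speed_blocks(speed_mm_s: float, max_speed_mm_s: float = 10.0,
                 n_blocks: int = 5) -> str:
    """Render pullback speed as filled ('#') and empty ('-') blocks."""
    clamped = max(0.0, min(speed_mm_s, max_speed_mm_s))
    filled = int(clamped / max_speed_mm_s * n_blocks)
    return "#" * filled + "-" * (n_blocks - filled)
```

For example, with these assumed parameters a pullback at 4 mm/s fills two of five blocks, while the maximum speed fills all five.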
The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG. 2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40. In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible”), and colors the second element distinguishably from the first element ([0136] of Strommer “A system (not shown) then marks only those regions on three-dimensional image 106, which are occluded more than the selected occlusion percentage. In the example set forth in FIG. 8B, only those regions of lumen 108 which are occluded 70% or more, are marked in three-dimensional image 106. Plaques 452 and 456, which exceed 70%, are represented by marked regions 470 and 472, respectively, on three-dimensional image 106. Marked regions 470 and 472 are differentiated from the rest of the portions of three-dimensional image 106, by being colored in a different hue, marked by hatches, animated, and the like.”; [0109] of CHAO “FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. 
In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.” [0011] of Jiang “The volume graphic can for example be provided in the form of 3D image data. 3D image data is to be understood here as a dataset comprising data for individual volume elements (voxel-volume elements) of the imaged volume. For each volume element in this case an intensity value for a gray tone can be specified or a number of intensity values for a color tone of the volume graphic can be specified.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 14, Strommer, CHAO and Jiang teach the image processing device according to claim 1, wherein the control unit is configured to receive an operation of pressing one or more predetermined keys as the user operation (see at least [0127] of Strommer “Three-dimensional image 106 corresponds with an activity-state 420 in ECG timing signal 412. When the operator presses forward button 414, a sequence of three-dimensional images of lumen 108 is displayed in a window 422. Each of the three-dimensional images displayed in window 422, corresponds with the respective activity-state in ECG timing signal 412, as if ECG timing signal 412 would advance in a direction designated by an arrow 424. 
“Each of the three-dimensional images, which are displayed in window 422, is acquired by a system (not shown), during the scanning process. Thus, the operator can view animated three-dimensional images of lumen 108 as the heart of the patient would beat either forward or backward in time. The operator can alternatively view a three-dimensional image of lumen 108, which corresponds with a selected activity-state during a selected heart cycle of the patient, by pressing freeze button 418 at a selected point in time. It is noted that other sequenced images, such as a reference real-time image (i.e., served as road map during navigation, such as a fluoroscopic image, and the like) can also be made to freeze-up.”; [0094] of CHAO “FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. 
Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”; [0107] FIG. 22 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. The screen display 2200 includes a live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, user instruction 2210, and labeling button 2220. In this example, the user instruction 2210 is instructing the user to click the labeling button 2220 when the pullback of the ultrasound transducer array 124 reaches the start of the common iliac vein. In some embodiments, this selection is optional, as the IVUS pullback virtual venogram system identifies the start and end of different vasculature segments automatically. In other embodiments, the IVUS pullback virtual venogram system permits the clinician or other user to select the marking of the start or end of a vasculature segment through voice, gesture, or other touch-free command, such that a non-sterile staff member is not needed to operate a keyboard, mouse, joystick, or other non-sterile input device.” [0044-0045] as shown in Fig.2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 15, Strommer, CHAO and Jiang teach an image processing system ([0166] of Strommer “Reference is further made to FIGS. 11 and 12. FIG. 
11 is a schematic illustration of a system, generally referenced 580, for automatically maneuvering a catheter within a lumen of the body of a patient, constructed and operative in accordance with another embodiment of the disclosed technique. FIG. 12 is a schematic illustration of a method by which the imaging system of the system of FIG. 11 determines the coordinates of a path within the lumen, in three dimensions.”) comprising: the image processing device according to claim 1 (as discussed above); and a probe including the sensor ([0081] of Strommer “This trajectory can be obtained for example, by employing a guided intravascular ultrasound catheter (GIVUS--not shown), in an imaging session prior to the planning session. The GIVUS is a catheter which includes an image detector (e.g., ultrasound transducer) at the tip thereof, and an MPS sensor in the vicinity of the image detector. The operator maneuvers the GIVUS within the lumen, as far as physically possible, and then pulls the GIVUS back through the lumen. During the pull-back, the image detector detects a plurality of two-dimensional images of the inside of the lumen.”; ([0055] “It is understood that the system 100 and/or device 102 can be configured to obtain any suitable intraluminal imaging data. In some embodiments, the device 102 may include an imaging component of any suitable imaging modality, such as optical imaging, optical coherence tomography (OCT), etc. In some embodiments, the device 102 may include any suitable non-imaging component, including a pressure sensor, a flow sensor, a temperature sensor, an optical fiber, a reflector, a mirror, a prism, an ablation element, a radio frequency (RF) electrode, a conductor, or combinations thereof. Generally, the device 102 can include an imaging element to obtain intraluminal imaging data associated with the lumen 120. The device 102 may be sized and shaped (and/or configured) for insertion into a vessel or lumen 120 of the patient.
[0120] of CHAO “In step 2490, if an appropriate user input has been selected, the processing system 106 provides guidance to the clinician regarding movements of the intravascular imaging probe controls 104 that may be required to advance or retract the probe 102 to a desired location within the patient's body, or to mark the start or end of a given vascular segment, or to start or stop recording. Such guidance may be determined through conventional techniques (e.g., database lookup) or through learning-based techniques”; [0067] The external imaging system 132 can be configured to obtain x-ray, radiographic, angiographic/venographic (e.g., with contrast), and/or fluoroscopic (e.g., without contrast) images of the body of a patient (including the vessel 120). External imaging system 132 may also be configured to obtain computed tomography images of the body of patient (including the vessel 120).” The external imaging system 132 may include an external ultrasound probe configured to obtain ultrasound images of the body of the patient (including the vessel 120) while positioned outside the body. In some embodiments, the system 100 includes other imaging modality systems (e.g., MRI) to obtain images of the body of the patient (including the vessel 120). The processing system 106 can utilize the images of the body of the patient in conjunction with the intraluminal images obtained by the intraluminal device 102.”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 16, Strommer, CHAO and Jiang teach the image processing system according to claim 15, further comprising: the display ([0077] The operator can direct the system via a user interface (not shown), to alternately display GUI 100 and GUI 102, on the display. The user interface can be a switch, foot pedal, and the like, as described herein below in connection with FIG. 4D.
Alternatively, the display can display GUI 100 and GUI 102 at the same time, side by side.”; [0126] Reference is further made to FIG. 7, which is a schematic illustration of an ECG coordinated display (i.e., a GUI) of a lumen, generally referenced 410, constructed and operative in accordance with a further embodiment of the disclosed technique. ECG coordinated display 410 includes an ECG timing signal 412, a forward button 414, a backward button 416, a freeze button 418 and three-dimensional image 106 (FIG. 1B).”; [0060] The PIM 104 transfers the received echo signals to the processing system 106 where the ultrasound image (including the flow information) is reconstructed and displayed on the monitor 108. The console or processing system 106 can include a processor and a memory. The processing system 106 may be operable to facilitate the features of the intraluminal imaging system 100 described herein. For example, the processor can execute computer readable instructions stored on the non-transitory tangible computer readable medium”, see Figs.5-23) In addition, the same motivation is used as the rejection for claim 1. Regarding independent claim 17, Strommer teaches an image display method of causing a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue (see at least [0057] “The disclosed technique overcomes the disadvantages of the prior art by graphically designating on an image of the lumen, the position where a medical device (e.g., a PCI device, a dilation balloon, a stent delivery system) has to be delivered, and indicating when the medical device has reached the selected position. The medical device is attached to the tip of a catheter. A medical positioning system (MPS) sensor constantly detects the position of the medical device relative to the selected position.
This position is represented on a real-time image (e.g., live fluoroscopy), a pseudo-real-time image (e.g., previously recorded cine-loop) or a previously recorded still image frame of the lumen, thereby obviating the need to radiate the inspected organ of the patient repeatedly, neither or to repeatedly inject contrast agent to the body of the patient. The medical staff can either guide the catheter manually according to feedback from an appropriate user interface, such as display, audio output, and the like, or activate a catheter guiding system which automatically guides the catheter toward the selected position.” [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), an image representing the biological tissue (see at least [0068] Two-dimensional image 104 can be a still image of the lumen system (i.e., one of the images among a plurality of images in a cine-loop, which the operator selects). In this case, the selected two-dimensional image can be an image whose contrast for example, is better (e.g., the difference in the brightness of the dark pixels and the bright pixels in the image, is large) than all the rest, and which portrays the lumen system in a manner which is satisfactory for the operator either to designate the selected location of the medical device, or to view a real-time representation of the stent, as the medical device is being navigated within the lumen system. 
[0069] With reference to FIG. 1B, GUI 102 includes a three-dimensional image 106 of a lumen (referenced 108) of the lumen system displayed in GUI 100, through which the catheter is being maneuvered. Three-dimensional image 106 is reconstructed from a plurality of two-dimensional images which are detected by a two-dimensional image acquisition device, during an image acquisition stage, by a technique known in the art.”) and display a first element on a screen same as the image (see at least [0074] An MPS sensor (not shown) is firmly attached to the tip of the catheter. Three-dimensional image 106 is registered with two-dimensional image 104, such that each point in two-dimensional image 104 corresponds to a respective point in three-dimensional image 106. In this manner, the coordinates of each point in three-dimensional image 106 can be projected onto two-dimensional image 104. Alternatively, each point in two-dimensional image 104 can be transferred to three-dimensional image 106 (e.g., by acquiring a series of two-dimensional images from different viewing angles). A real-time representation 110 (FIG. 1A) of the MPS sensor is superimposed on lumen 108, as described herein below in connection with FIG. 6C. A real-time representation 112 (FIG. 1B) of the MPS sensor is superimposed on three-dimensional image 106.”; [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 
2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”), the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). 
When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), the image display method comprising: receiving a user operation of requesting marking of the position of the sensor (see at least [0079] With reference to FIG. 2A, during a planning session, the operator graphically designates a plurality of marks 116, 118, and 120 on two-dimensional image 104, as a selected position within lumen 108, which a medical device (not shown) is to be delivered to. The operator performs the marking either on a frozen two-dimensional image of lumen 108, or on a frozen reconstructed three-dimensional model of lumen 108. The operator performs the marking in different manners, such as manually, according to an automated two-dimensional or three-dimensional quantitative cardiac assessment (QCA), and the like.”; [0084] For simplicity, the medical device in the example set forth in FIGS. 2A, 2B, 3A, and 3B, is a stent. In this case, each of marks 116, 118, and 120 is a substantially straight line, which is substantially perpendicular to lumen 108. For example, marks 116 and 120 designate the two ends of the stent, while mark 118 designates the middle of the stent. Marks 116, 118, and 120 define the location of the stent in lumen 108, as well as the orientation thereof. 
The marking is performed via a user interface (not shown), such as a joystick, push button, pointing device (e.g., a mouse, stylus and digital tablet, track-ball, touch pad), and the like.); and causing the display to display a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0088] During the medical operation, the system superimposes features 128 and 130 together with marks 116, 118 and 120, while the catheter is being maneuvered through lumen 108, either on a real-time two-dimensional image of lumen 108 (e.g., angiogram), on a two-dimensional cine-loop of lumen 108, or on a frozen two-dimensional image of lumen 108. 
Additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, while the catheter is being maneuvered through lumen 108, either on a real-time three-dimensional image of lumen 108, on a still three-dimensional image of lumen 108, or on a cine-loop of lumen 108. Further additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, on the real-time two-dimensional image of lumen 108, as well as one or more navigation images of lumen 108 (e.g., virtual IVUS image--either a still image or a cine-loop), acquired from viewing angles different than that of the real-time two-dimensional image.”) Strommer is understood to be silent on the remaining limitations of claim 17. In the same field of endeavor, CHAO teaches an image display method of causing a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue (see at least [0093] FIG. 11B illustrates screen display 1100 of a live view during a pullback procedure in accordance with at least one embodiment of the present disclosure. A virtual venogram 500, acting as a roadmap in the live view 1100, automatically shows where the transducer array 124 is located within the body. In some embodiments, a co-registered X-ray, CAT scan, or fluoroscopy image may be used as a roadmap instead of or in addition to the virtual venogram 500. The screen display 1100 also includes a live tomographic IVUS image 1010. In addition, the screen display 1100 includes image setting controls 1120 (e.g., gain, field of view, etc.).), an image representing the biological tissue and display a first element on a screen same as the image, the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. 
A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”), the image display method comprising: receiving a user operation of requesting marking of the position of the sensor (see at least [0094] FIG. 12; [0107] FIG. 22 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. 
The screen display 2200 includes a live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, user instruction 2210, and labeling button 2220. In this example, the user instruction 2210 is instructing the user to click the labeling button 2220 when the pullback of the ultrasound transducer array 124 reaches the start of the common iliac vein. In some embodiments, this selection is optional, as the IVUS pullback virtual venogram system identifies the start and end of different vasculature segments automatically. In other embodiments, the IVUS pullback virtual venogram system permits the clinician or other user to select the marking of the start or end of a vasculature segment through voice, gesture, or other touch-free command, such that a non-sterile staff member is not needed to operate a keyboard, mouse, joystick, or other non-sterile input device.” ); and causing the display to display a second element ([0109] FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. 
The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.”) the second element being fixed at a position at time of the user operation (see at least [0109] FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.) In addition, the same motivation is used as the rejection for claim 1. Both Strommer and CHAO are understood to be silent on the remaining limitations of claim 17. In the same field of endeavor, Jiang teaches display a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation ([0044] as shown in Fig. 2. 
“The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG. 2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40. In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible. [0045] Signals are also generated in each case by the positioning unit 22 for the individual positions 44, from which the positioning module 30 creates position data 34 for the positions 44. In addition there can be provision for a spatial orientation 52 of the positioning unit 52 to be created by the positioning module 30 from the signals of the positioning unit 22 and thus the plane of the cross-section 46. This can likewise be transferred as orientation data 54 from the localization module 30 to the graphic module 32. The spatial orientation 52 is represented in each case by a normal vector of the cross-sectional plane in FIG. 2,” where 52 and 44 are the same fixed position) In addition, the same motivation is used as the rejection for claim 1. 
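For illustration only, the claimed display behavior at issue here (a first element that tracks the sensor as it moves, and a second element fixed at the first element's position at the time of a user marking request) can be sketched in a few lines of code. This is purely a hypothetical sketch of the claim language; the class and method names below do not come from the application or any cited reference.

```python
# Hypothetical sketch of the claimed marking behavior: a moving "first
# element" tracks the sensor position, and a fixed "second element" is
# pinned at the first element's position when the user requests marking.
# All names here are illustrative, not drawn from the cited references.

class SensorMarkingDisplay:
    def __init__(self):
        self.first_element_pos = 0.0   # displaced as the sensor moves
        self.second_elements = []      # marks fixed at marking time

    def on_sensor_moved(self, new_position):
        """The first element follows the sensor's current position."""
        self.first_element_pos = new_position

    def on_marking_requested(self):
        """On a user marking request, fix a second element at the
        first element's position at the time of the operation."""
        self.second_elements.append(self.first_element_pos)

    def render(self):
        """Return what would be drawn on the same screen as the image."""
        return {"first": self.first_element_pos,
                "fixed": list(self.second_elements)}


display = SensorMarkingDisplay()
display.on_sensor_moved(12.5)
display.on_marking_requested()   # second element fixed at 12.5
display.on_sensor_moved(30.0)    # first element moves on; the mark stays
print(display.render())          # {'first': 30.0, 'fixed': [12.5]}
```

The sketch shows only the division of roles between the two elements; it does not model registration, coordinate systems, or any rendering details of the references.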
Thus, the combination of Strommer, CHAO and Jiang teaches an image display method of causing a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue, an image representing the biological tissue and display a first element on a screen same as the image, the first element representing a position of the sensor and being displaced as the sensor moves, the image display method comprising: receiving a user operation of requesting marking of the position of the sensor; and causing the display to display a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation. Regarding claim 18, Strommer, CHAO and Jiang teach the image display method according to claim 17, further comprising: setting a color of the second element to a color different from a color of the first element (see at least [0091] of Strommer “It is further noted that the operator can direct the system to either turn on or turn off the display of superposition of any of the marks, the representation of the position of the stent, the trajectory, or a combination thereof, via the user interface. Any attribute can be selected to represent the marks and the representation of the stent, as long as they are different, such as color, shape, size, and the like. However, a mark or a stent representation is displayed by the same attribute both in two-dimensional image 104 and three-dimensional image 106. For example, marks 116, 118, 120, 122, 124, and 126 are represented in green, features 128, 130, 132, and 134 are represented in blue, and trajectory 140 is represented in red.” [0074] of CHAO “FIGS. 5-9 illustrate screen displays providing the guidance to the clinician during a IVUS pullback in peripheral vasculature. The screen displays advantageously provide a user with additional clarity to more clearly visualize aspects of deep venous disease. 
The screen displays perform several functions, including highlighting the segments of the vasculature, labeling the segments, and color coding or otherwise highlighting/distinguishing the segments and/or neighboring anatomy. The screen displays also automatically provide reference and compression measures (e.g., cross-sectional lumen area, diameter, etc.) within each of the segments. Segments meeting certain criteria (e.g., greater than or equal to 50% difference between reference and compression measures) are colored, highlighted, bolded, or marked differently (e.g., colored red) to indicate a segment of clinical interest or concern. Additionally, the screen displays provide real time feedback for the user about pullback speed. The GUIs can also provide for image quality improvement by provided the ability to adjust contrast, gain, focus, and/or other image settings. Image quality can also be improved based on providing feedback to the user to reach the correct pullback speed to obtain sufficient amount of high quality IVUS data. The screen displays provide: map to anatomy directly, immediate live values (reference, compression measurements), color coded segment highlights, pullback speed gauge (guidance),” where color codes are used for various elements; [0083] In this example, a reference value 746 and compression value 748 associated with the CIV segment 540 are automatically provided on the screen display as the transducer array 124 moves within the vasculature. For example, the compression value 748 may be a numerical value of the cross-sectional lumen area for the particular patient, or a % compression value. In that regard, the compression value is automatically calculated based on the obtained IVUS data and then output to the screen display adjacent to the virtual venogram 500. In this example, the CIV segment 540 is colored based on the comparison between the reference value and the compression value. 
For example, comparison can be a ratio of the compression value 748 and the reference value 746 (e.g., compression value divided by reference value). In this example, the CIV segment 540 is colored differently than the IVC segment 540. For example, when the compression value 748 is less than 50% of the reference value 746, the segment can be colored in a second color (e.g., green) to indicate that the amount of compression is potentially harmful to the patient. Different colorings, shadings, highlighting can be used for the comparison of the reference value 746 and compression value 748 (e.g., different colors for greater than 50%, less than 50%, between 0% and 25%, between 25 and 50%, between 50% and 75%, between 75% and 100%)”; [0011] of Jiang). In addition, the same motivation is used as the rejection for claim 1. Regarding claim 19, Strommer and CHAO teach the image display method according to claim 17, further comprising: moving the sensor to a position corresponding to a position of the second element upon receiving an operation of requesting movement of the sensor to the position corresponding to the position of the second element (see at least [0083] of Strommer During the planning session, a respective one of the displays displays marks 116, 118 and 120 articulated by the user interface on an image of lumen 108. The operator can move marks 116, 118 and 120 together along the full length of the trajectory (e.g., trajectory 114 of FIG. 1B). Mark 118 designates the middle of the medical device, while marks 116 and 120 designate the rear end and the front end of the medical device, respectively. The system determines the distance between marks 116 and 120, according to the type (e.g., the size of stent) which the operator has selected. Marks 116, 118 and 120 together, are locked-on to the trajectory, while being operative to travel along the trajectory. 
The operator designates the position of mark 118 along the trajectory where the medical device is to be delivered to. [0211] of Strommer “In procedure 846, a representation respective of the selected position is superimposed on the real-time navigation image, thereby enabling an operator to visually navigate the medical device toward the selected position. With reference to FIGS. 13 and 15A, processor 666 produces real-time superimposed two-dimensional image 760, by superimposing a representation of each of marks 808, 810, and 812 on a real-time two-dimensional image of lumen 722, of catheter 732, and of medical device 762. Thus, the operator can visually navigate medical device 762 toward the selected position, according to real-time superimposed two-dimensional image 760. [0212] According to another aspect of the disclosed technique, different trajectories of an MPS catheter within the lumen is determined, corresponding to different activity states of an organ of the patient, by moving the MPS catheter within the lumen. Each trajectory is defined in a three-dimensional MPS coordinate system, and is time-tagged with the corresponding activity state. Each trajectory is superimposed on a real-time two-dimensional image of the lumen, according to the activity state associated with the real-time two-dimensional image. This superimposed real-time two-dimensional which is associated with the organ timing signal detected by an organ timing signal monitor, is displayed on the display, thereby enabling the operator to mark the selected position on the superimposed real-time two-dimensional image. The operator, navigates the medical device to the selected position, either automatically or manually by employing the method of FIG. 5, as described herein above. Alternatively, the operator navigates the medical device to the selected position, visually, by employing the method of FIG. 
17, as described herein above.”; [0044-0045] as shown in Fig. 2 of Jiang) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 20, Strommer teaches causing a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue (see at least [0057] The disclosed technique overcomes the disadvantages of the prior art by graphically designating on an image of the lumen, the position where a medical device (e.g., a PCI device, a dilation balloon, a stent delivery system) has to be delivered, and indicating when the medical device has reached the selected position. The medical device is attached to the tip of a catheter. A medical positioning system (MPS) sensor constantly detects the position of the medical device relative to the selected position. This position is represented on a real-time image (e.g., live fluoroscopy), a pseudo-real-time image (e.g., previously recorded cine-loop) or a previously recorded still image frame of the lumen, thereby obviating the need to radiate the inspected organ of the patient repeatedly, neither or to repeatedly inject contrast agent to the body of the patient. The medical staff can either guide the catheter manually according to feedback from an appropriate user interface, such as display, audio output, and the like, or activate a catheter guiding system which automatically guides the catheter toward the selected position.” [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). 
When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), an image representing the biological tissue (see at least [0068] Two-dimensional image 104 can be a still image of the lumen system (i.e., one of the images among a plurality of images in a cine-loop, which the operator selects). In this case, the selected two-dimensional image can be an image whose contrast for example, is better (e.g., the difference in the brightness of the dark pixels and the bright pixels in the image, is large) than all the rest, and which portrays the lumen system in a manner which is satisfactory for the operator either to designate the selected location of the medical device, or to view a real-time representation of the stent, as the medical device is being navigated within the lumen system. [0069] With reference to FIG. 1B, GUI 102 includes a three-dimensional image 106 of a lumen (referenced 108) of the lumen system displayed in GUI 100, through which the catheter is being maneuvered. Three-dimensional image 106 is reconstructed from a plurality of two-dimensional images which are detected by a two-dimensional image acquisition device, during an image acquisition stage, by a technique known in the art.”) and display a first element on a screen same as the image (see at least [0074] An MPS sensor (not shown) is firmly attached to the tip of the catheter. Three-dimensional image 106 is registered with two-dimensional image 104, such that each point in two-dimensional image 104 corresponds to a respective point in three-dimensional image 106. In this manner, the coordinates of each point in three-dimensional image 106 can be projected onto two-dimensional image 104. 
Alternatively, each point in two-dimensional image 104 can be transferred to three-dimensional image 106 (e.g., by acquiring a series of two-dimensional images from different viewing angles). A real-time representation 110 (FIG. 1A) of the MPS sensor is superimposed on lumen 108, as described herein below in connection with FIG. 6C. A real-time representation 112 (FIG. 1B) of the MPS sensor is superimposed on three-dimensional image 106.”; [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”), the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 
2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0092] With reference to FIGS. 3A and 3B, while the catheter is being maneuvered through lumen 108, each of two-dimensional image 104 and three-dimensional image 106, is displayed relative to the coordinate system of lumen 108 (i.e., relative to the MPS sensor which is attached to the catheter, and which constantly moves together with lumen 108). When the stent reaches the selected position (i.e., front end of the stent is substantially aligned with mark 120 and the rear end thereof is substantially aligned with mark 116), a user interface (e.g., audio, visual, or tactile device--not shown) announces the event to the operator.”), the processing comprising: causing the display to display, upon reception of a user operation of requesting marking of the position of the sensor (see at least [0079] With reference to FIG. 2A, during a planning session, the operator graphically designates a plurality of marks 116, 118, and 120 on two-dimensional image 104, as a selected position within lumen 108, which a medical device (not shown) is to be delivered to. The operator performs the marking either on a frozen two-dimensional image of lumen 108, or on a frozen reconstructed three-dimensional model of lumen 108. 
The operator performs the marking in different manners, such as manually, according to an automated two-dimensional or three-dimensional quantitative cardiac assessment (QCA), and the like.”; [0084] For simplicity, the medical device in the example set forth in FIGS. 2A, 2B, 3A, and 3B, is a stent. In this case, each of marks 116, 118, and 120 is a substantially straight line, which is substantially perpendicular to lumen 108. For example, marks 116 and 120 designate the two ends of the stent, while mark 118 designates the middle of the stent. Marks 116, 118, and 120 define the location of the stent in lumen 108, as well as the orientation thereof. The marking is performed via a user interface (not shown), such as a joystick, push button, pointing device (e.g., a mouse, stylus and digital tablet, track-ball, touch pad), and the like.), a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation (see at least [0087] During the medical operation, following the planning session, a catheter which includes a stent (not shown), is maneuvered within lumen 108 toward marks 116, 118 and 120. An MPS sensor (not shown) is attached to the catheter in the vicinity of the stent. With reference to FIGS. 2A and 2B, the position of the front end and of the rear end of the stent are represented in real-time, by features 128 and 130, respectively, on two-dimensional image 104, and by features 132 and 134, respectively, on three-dimensional image 106. In the example set forth in FIGS. 2A and 2B, each of features 128 and 130 is in form of a rectangle with longitudinal lines 136 and 138, respectively, dividing each rectangle in two. The actual trajectory of the catheter is represented by a feature 140 (FIG. 2B) superimposed on three-dimensional image 106. 
The actual trajectory of the catheter can be represented by another feature (not shown) superimposed on two-dimensional image 104.”; [0088] During the medical operation, the system superimposes features 128 and 130 together with marks 116, 118 and 120, while the catheter is being maneuvered through lumen 108, either on a real-time two-dimensional image of lumen 108 (e.g., angiogram), on a two-dimensional cine-loop of lumen 108, or on a frozen two-dimensional image of lumen 108. Additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, while the catheter is being maneuvered through lumen 108, either on a real-time three-dimensional image of lumen 108, on a still three-dimensional image of lumen 108, or on a cine-loop of lumen 108. Further additionally, the system superimposes features 132 and 134 together with marks 122, 124 and 126, on the real-time two-dimensional image of lumen 108, as well as one or more navigation images of lumen 108 (e.g., virtual IVUS image--either a still image or a cine-loop), acquired from viewing angles different than that of the real-time two-dimensional image.”) Strommer is understood to be silent on the remaining limitations of claim 20. In the same field of endeavor, CHAO teaches a non-transitory computer-readable medium storing an image processing program configured to cause a computer to execute processing ([0126] The memory 2564 may include a cache memory (e.g., a cache memory of the processor 2560), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory. In an embodiment, the memory 2564 includes a non-transitory computer-readable medium. 
The memory 2564 may store instructions 2566. The instructions 2566 may include instructions that, when executed by the processor 2560, cause the processor 2560 to perform the operations described herein. Instructions 2566 may also be referred to as code. The terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements”), the computer causing a display (Fig.1, item 108) to display, based on tomographic data ([0093] FIG. 11B illustrates screen display 1100 of a live view during a pullback procedure in accordance with at least one embodiment of the present disclosure. A virtual venogram 500, acting as a roadmap in the live view 1100, automatically shows where the transducer array 124 is located within the body. In some embodiments, a co-registered X-ray, CAT scan, or fluoroscopy image may be used as a roadmap instead of or in addition to the virtual venogram 500. The screen display 1100 also includes a live tomographic IVUS image 1010. In addition, the screen display 1100 includes image setting controls 1120 (e.g., gain, field of view, etc.).) acquired by a sensor (Fig.1, item 124; [0093] FIG. 11B illustrates screen display 1100 of a live view during a pullback procedure in accordance with at least one embodiment of the present disclosure. A virtual venogram 500, acting as a roadmap in the live view 1100, automatically shows where the transducer array 124 is located within the body. In some embodiments, a co-registered X-ray, CAT scan, or fluoroscopy image may be used as a roadmap instead of or in addition to the virtual venogram 500. The screen display 1100 also includes a live tomographic IVUS image 1010. 
In addition, the screen display 1100 includes image setting controls 1120 (e.g., gain, field of view, etc.).) moving in a lumen of a biological tissue (Figure 1, item 120), an image representing the biological tissue and display a first element on a screen same as the image (see at least [0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. 
Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”) the first element representing a position of the sensor and being displaced as the sensor moves (see at least [0094] FIG. 12 illustrates a screen display 1100 during pullback, e.g., during recording of the IVUS data, in accordance with at least one embodiment of the present disclosure. A current frame indicator 1215 shows where on the cartoon roadmap or virtual venogram 500 of the vasculature the transducer array 124 of the catheter 510 is presently located. Label presets 1220 are also provided (e.g., vasculature segment abbreviations such as CIV, EIV, CFV, etc.). The IVUS frames are automatically labeled based on image analysis. In this example, the current position of the transducer array has been identified as the exterior iliac vein 550, and so the EIV label preset 1220 is highlighted or illuminated. A pullback speed indicator 1230 provides guidance to the clinician or other user for a stable pullback speed. The pullback speed indicator 1230 can be a series of blocks that are filled based on the speed (e.g., more blocks indicate faster speed and fewer blocks indicate slower speed). A tomographic IVUS image 1010 shows the current frame, and an automatic label 1240 can be generated using image analysis with the label presets described with respect to the current frame indicator 1215, e.g., by the vasculature segment abbreviation. Bookmark thumbnails 1250 appear when the user presses the bookmark option and/or the label preset option. A direction indicator 1260 is also included, showing, e.g., the orientation or direction of movement of the transducer array. Anterior (A), posterior (P), medial (M), lateral (L), and/or other suitable direction labels can be used. The direction indicator can include a compass arrow that moves based on the direction of movement. 
Interesting anatomy 1270 (e.g., thrombus) within the IVUS image 1010 can be colored, shaded, and/or highlighted.”), the processing comprising: causing the display ([0063] “The controller or processing system 106 may include a processing circuit having one or more processors in communication with memory and/or other suitable tangible computer readable storage media. The controller or processing system 106 may be configured to carry out one or more aspects of the present disclosure. In some embodiments, the processing system 106 and the monitor 108 are separate components. In other embodiments, the processing system 106 and the monitor 108 are integrated in a single component. For example, the system 100 can include a touch screen device, including a housing having a touch screen display and a processor. The system 100 can include any suitable input device, such as a touch sensitive pad or touch screen display, keyboard/mouse, joystick, button, etc., for a user to select options shown on the monitor 108. The processing system 106, the monitor 108, the input device, and/or combinations thereof can be referenced as a controller of the system 100. The controller can be in communication with the device 102, the PIM 104, the processing system 106, the monitor 108, the input device, and/or other components of the system 100.”) to display, upon reception of a user operation of requesting marking of the position of the sensor, a second element, the second element being fixed at a position at time of the user operation (see at least [0094] FIG. 12; [0107] FIG. 22 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. The screen display 2200 includes a live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, user instruction 2210, and labeling button 2220. 
In this example, the user instruction 2210 is instructing the user to click the labeling button 2220 when the pullback of the ultrasound transducer array 124 reaches the start of the common iliac vein. In some embodiments, this selection is optional, as the IVUS pullback virtual venogram system identifies the start and end of different vasculature segments automatically. In other embodiments, the IVUS pullback virtual venogram system permits the clinician or other user to select the marking of the start or end of a vasculature segment through voice, gesture, or other touch-free command, such that a non-sterile staff member is not needed to operate a keyboard, mouse, joystick, or other non-sterile input device.” [0109] FIG. 23 is a screenshot of a pullback navigation and marking display 2200, in accordance with at least one embodiment of the present disclosure. Visible are the live tomographic IVUS image 1010, image longitudinal display (ILD) 1020, virtual venogram 500, pullback speed indicator 520, one-line user instruction 2210, labeling button 2220, artery 2230 (no longer bifurcating but now joined into a single lumen), and bifurcating vein 2240. In this example, the common iliac vein (CIV) 540 has been marked and highlighted on the virtual venogram, indicating that this is the segment of the patient's vasculature presently occupied by the ultrasound imaging array 124. In this example, the right external iliac vein (EIV) 550 is marked in a different color (e.g., light gray) to indicate this is the next segment the imaging array 124 will enter. The rest of the right-leg vasculature 1720 is marked with dotted lines, to show that it is not currently involved in the pullback procedure, while the left leg vasculature is grayed out (e.g., displayed with a gray color close to the background color) to indicate that it will not be involved in the pullback procedure at all.”) In addition, the same motivation is used as the rejection for claim 1. 
Both Strommer and CHAO are understood to be silent on the remaining limitations of claim 20. In the same field of endeavor, Jiang teaches display a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the operation ([0044] as shown in Fig 2. “The blood vessel 40 examined in the underlying example by means of an imaging system 10 is shown in FIG. 2. FIG. 2 shows the situation after the image data 36 and the position data 34 have been obtained by means of the catheter 12. The catheter 12 was moved along a course or a path or a track 42 through the blood vessel 40. At a number of different positions 44 along the track 42 image data 36 is created by means of the ultrasound unit 20 for a cross-section 46 of the vessel 40 by the ultrasound unit 20. In FIG. 2 the cross-sections 46 are illustrated in each case as the sectional set of the points which was produced between the plane in which the cross-section 46 was obtained and the blood vessel 40. In the image data of each cross-section 46 the blood present in the vessel 40 around the catheter 12, an internal surface 48 of the vessel 40, a vessel wall 50 of the vessel 40 itself and if necessary also a part of the body tissue surrounding the vessel wall 50 are visible. [0045] Signals are also generated in each case by the positioning unit 22 for the individual positions 44, from which the positioning module 30 creates position data 34 for the positions 44. In addition there can be provision for a spatial orientation 52 of the positioning unit 52 to be created by the positioning module 30 from the signals of the positioning unit 22 and thus the plane of the cross-section 46. This can likewise be transferred as orientation data 54 from the localization module 30 to the graphic module 32. The spatial orientation 52 is represented in each case by a normal vector of the cross-sectional plane in FIG. 
2,” where 52 and 44 are the same fixed position). In addition, the same motivation applies as in the rejection of claim 1. Thus, the combination of Strommer, CHAO, and Jiang teaches a non-transitory computer-readable medium storing an image processing program configured to cause a computer to execute processing, the computer causing a display to display, based on tomographic data acquired by a sensor moving in a lumen of a biological tissue, an image representing the biological tissue and display a first element on a screen same as the image, the first element representing a position of the sensor and being displaced as the sensor moves, the processing comprising: causing the display to display, upon reception of a user operation of requesting marking of the position of the sensor, a second element together with the first element, the second element being fixed at a position same as a position of the first element at time of the user operation.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH LE, whose telephone number is (571) 270-7842. The examiner can normally be reached Monday 8 AM-4:30 PM EST, Tuesday 8 AM-3:30 PM EST, and Wednesday 8 AM-2:30 PM EST; Thursday and Friday off. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARAH LE/ Primary Examiner, Art Unit 2614
/KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614
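The claim-20 limitation mapped above describes a concrete display behavior: a first element tracks the sensor's live position, and on a user's marking request a second element is fixed at whatever position the first element occupies at that moment. As a minimal illustration only (the class and method names here are hypothetical and are not drawn from Strommer, CHAO, or Jiang), that behavior can be sketched as:

```python
# Illustrative sketch of the claimed marking behavior.
# All names are hypothetical; none come from the cited references.

class MarkingDisplay:
    """Tracks a live sensor position and fixes marks on user request."""

    def __init__(self):
        self.first_element = None    # displaced as the sensor moves
        self.second_elements = []    # fixed at marking time, never moved

    def on_sensor_moved(self, position):
        # The first element represents the sensor's position in real time.
        self.first_element = position

    def on_mark_requested(self):
        # The second element is fixed at the same position the first
        # element occupies at the time of the user operation.
        if self.first_element is not None:
            self.second_elements.append(self.first_element)

display = MarkingDisplay()
display.on_sensor_moved((10.0, 2.5))    # sensor advances in the lumen
display.on_mark_requested()             # mark fixed at (10.0, 2.5)
display.on_sensor_moved((12.0, 2.5))    # first element moves on; mark stays
```

A marking operation copies the tracking element's current position into a list of fixed marks, so subsequent sensor movement displaces only the first element, matching the "fixed at a position same as a position of the first element at time of the user operation" language.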

Prosecution Timeline

Mar 28, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §103, §112
Apr 08, 2026
Interview Requested
Apr 15, 2026
Applicant Interview (Telephonic)
Apr 16, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569321
PROPOSING DENTAL RESTORATION MATERIAL PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Patent 12573128
Progressive Compression of Geometry for Graphics Processing
2y 5m to grant Granted Mar 10, 2026
Patent 12536715
GENERATION OF STYLIZED DRAWING OF THREE-DIMENSIONAL SHAPES USING NEURAL NETWORKS
2y 5m to grant Granted Jan 27, 2026
Patent 12505585
SYSTEMS AND METHODS FOR OVERLAY OF VIRTUAL OBJECT ON PROXY OBJECT
2y 5m to grant Granted Dec 23, 2025
Patent 12505590
NODE LIGHTING
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.4%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
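The headline figures are consistent with the examiner statistics shown on this page. As a rough sketch only (the 99% ceiling is an assumption, since naively adding the 33.4-point interview lift to the 67% allow rate would exceed 100%):

```python
# Reproducing this page's projection figures from its own examiner stats.
granted, resolved = 172, 258    # career counts shown above
interview_lift_pts = 33.4       # interview lift, in percentage points

allow_rate = round(granted / resolved * 100)    # career allow rate, %
# Ceiling at 99% is an assumption: 67 + 33.4 points would exceed 100%.
with_interview = min(99, round(allow_rate + interview_lift_pts))
```

This yields the page's 67% grant probability and 99% with-interview figure.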
