Prosecution Insights
Last updated: April 19, 2026
Application No. 17/503,544

IMAGING PROBE WITH COMBINED ULTRASOUND AND OPTICAL MEANS OF IMAGING

Non-Final OA: §103, §112
Filed: Oct 18, 2021
Examiner: KELLOGG, MICHAEL S
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sunnybrook Health Sciences Centre
OA Round: 5 (Non-Final)

Grant Probability: 42% (Moderate)
OA Rounds: 5-6
To Grant: 4y 6m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 42% (grants 42% of resolved cases; 114 granted / 268 resolved; -27.5% vs TC avg)
Interview Lift: +55.8% (strong lift for resolved cases with interview)
Typical Timeline: 4y 6m avg prosecution; 30 currently pending
Career History: 298 total applications across all art units
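The headline metrics above can be reproduced from the raw counts. A minimal sketch follows; the counts (114 granted / 268 resolved) come from the panel, while the lift definition (relative change in allowance rate for interviewed vs. non-interviewed cases) and the illustrative split rates are assumptions about how the tool computes its figures:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a fraction of resolved cases."""
    return granted / resolved

# Counts from the Examiner Intelligence panel: 114 granted / 268 resolved.
rate = allow_rate(114, 268)
print(f"Career allow rate: {rate:.1%}")  # ~42.5%, shown as "42%"

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative lift: (rate with interview - rate without) / rate without."""
    return (rate_with - rate_without) / rate_without

# Hypothetical split rates chosen purely to illustrate the formula;
# the dashboard does not publish the underlying per-group rates.
print(f"Example lift: {interview_lift(0.62, 0.398):+.1%}")
```

With these assumed split rates the formula yields roughly +55.8%, matching the displayed lift, but the actual per-group rates behind the dashboard figure are not shown here.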

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 34.5% (-5.5% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 33.3% (-6.7% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 268 resolved cases.
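Each per-statute delta implies the Tech Center baseline being compared against. A small sketch, assuming the delta is simply the examiner's rate minus the TC average in percentage points (an assumption about the tool's methodology; the rates and deltas themselves are the panel's published figures):

```python
# Back out the implied Tech Center average from each statute-specific
# rate and its delta, assuming delta = examiner_rate - tc_avg
# in percentage points.
stats = {
    "101": (8.2, -31.8),
    "103": (34.5, -5.5),
    "102": (20.6, -19.4),
    "112": (33.3, -6.7),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"Sec. {statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, all four deltas imply the same ~40% baseline, which is consistent with a single Tech Center average estimate behind the chart.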

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/10/2025 has been entered.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) as follows: the later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Provisional Application No. 60/881,169 (hereafter PD1), fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Specifically, PD1 does not teach or describe a dynamic composite image that employs alternation.
Indeed, one will not find the terms "dynamic" or synonyms, nor "alternates" or synonyms, nor is the generic formation of a composite image actually disclosed in any meaningful detail in PD1: it is merely mentioned in passing in the single paragraph bridging pages 53-54 of the specification of PD1 and is never depicted or subject to clear disclosure that would allow one of ordinary skill in the art to make or use a composite image of the sort described in claim 1 of the instant application. Therefore the subject matter of claims 1-20 of the instant application is held to have an effective filing date of January 22, 2008, in accordance with the filing of Application No. 12/010,208 (now Patent No. 8,784,321), which was the earliest priority document to contain the claimed subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 5 recites the limitation "said one or more first subregion portions of said first image" in lines 1-2 and the limitation "said one or more portions of said second image second subregions" in lines 2-3. There is insufficient antecedent basis for both of these limitations in the claim.
For compact prosecution, the examiner recommends replacing the former with "said one or more first subregions" and the latter with "said one or more second subregions" in accordance with the newly claimed wording of parent claim 1. Claim 6 is similarly affected, at least by virtue of dependency.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C.
102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-2, 7-10, 12-14, 16-18 and 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over US 20050101859 A1 by Michael Maschke (hereafter Maschke, previously of record) further in view of JP 3639030 B2 by Oishi et al. (hereafter Oishi, previously of record; machine translation previously provided) together or, alternatively, further in view of WO 2007030424 A2 by Kuhn (hereafter Kuhn, previously of record).

Regarding claim 1, Maschke teaches:

1. A method of displaying co-registered images (see Maschke's Abstract and [0019]-[0020], or simply see Fig. 5), said method comprising the steps of: obtaining a first image and a second image from an imaging catheter configured to obtain images according to two or more imaging modalities (see Maschke's Figs. 1-3, noting the catheters 1, 7, or 9 with multimodal sensors 4, 11, 3, 8, or 10, and/or see the Abstract, which covers as much and explains the reference characters depicted in said figures), wherein said first image is obtained according to a first imaging modality and said second image is obtained according to a second imaging modality, and wherein said imaging catheter is configured such that respective imaging beams associated with said first and second imaging modalities spatially overlap, thereby achieving spatial co-registration of said first image and said second image (see e.g. Maschke's Fig. 5, where 5A is an OCT image, 5B is an IVUS image, and 5C-D are a spatially co-registered composite image; additionally, see Maschke's [0024], which textually describes the exact feature the applicant appears to be driving at, which is that the spatial overlap can be a function of the image acquisition directly by gathering the same images at the same time, location, and in phase with each other; however, and for compact prosecution purposes, the examiner also notes that gathering these out of phase (e.g.
with sensors operating at different rotational speeds) would still be the imaging catheter being formatted so as to have spatially overlapping acquisitions that achieve (through further processing) co-registration, such that the cited figures alone, which show the co-registered acquisitions, appear to teach the breadth of the claim language); and processing the first image and the second image to … generate a dynamic composite image in which throughout a prescribed image region (see Maschke's Figs. 5A-C), … such that the dynamic composite image enables identification of features in both the first image and the second image (see Maschke's Figs. 5A-C, where C itself is a fusion of A+B as per [0035]-[0037], where the display can encompass multiple images as per [0048], and where these images can be used to "enable" identification of features in each image modality either because this is inherent (i.e. if the data is displayed then the user could, at their prerogative, accomplish as much) or as per e.g. [0055]-[0056], which describes that the images contain distinct details so as to explicitly enable as much. As for being dynamic, see e.g. [0009]-[0010], which explains that these 360-degree/2D images are individual cross sections of the artery within a larger image sequence obtained while the device is transiting a pullback; see also e.g. [0027]-[0029] or [0044]-[0047] or [0051]); wherein said prescribed image region of said dynamic composite image comprises one or more first subregions displaying image data from said first image (see Maschke's Figs. 5A and 5C) and one or more second subregions displaying image data from said second image (see Maschke's Figs. 5B and 5C), said one or more first subregions being absent of spatial overlap with said one or more second subregions (see Maschke's Figs.
5A-C); and wherein borders between adjacent subregions vary spatially with time (as iterated above, Maschke's images are a sequence obtained as the device is driven along the artery; see e.g. [0009]-[0010] or see above. As such, whether or not the absolute position of the box which defines the border depicted in Maschke's Fig. 5C moves relative to the outer border of Fig. 5C or relative to its past locations, the first and second regions, and thus the border between them, are still going to vary spatially with time, since the data is gathered over time as the imaging ROI moves spatially within the body. While the foregoing addresses the claimed wording, the examiner does understand what the applicant is arguing for and thus, for compact prosecution purposes, the examiner notes that, at least in the alternative, Oishi also covers this in multiple ways as addressed below).

Maschke therefore teaches the majority of the claimed limitations and even further teaches that the user should be able to alter the presentation of the images for display (see Maschke's [0048], noting in particular "An I/O unit 25 is connected to the display unit 24 and can be used to enter information. In particular it can influence the presentation of the image or images shown on the display unit 24. The I/O unit 25 can be embodied as a keyboard or an operator console and is also connected to data bus 21"). However, the examiner omitted that the processing is such that the first and second image data "alternates with time" because Maschke does not state that the user should alternate the display with time. Likewise, while Maschke teaches that the "borders between adjacent subregions vary spatially with time" from a first perspective, this can also be taught, at least in the alternative, from a second perspective to compact prosecution.
To that end Oishi, in the related field of multi-modality imaging and display processing (see Oishi's Abstract or [0001]-[0002]), teaches that one can repeatedly move or alter the dimensions of overlaid images, or alternate between two images in an area over time, and that this would be advantageous (see Oishi's Fig. 23 and [0171]-[0175], which show and iterate that a dynamic composite image is formed on the display with images from two modalities, and which state in relevant part, with respect to movement of the boundaries of the ROI, that "Further, by operating the mouse m or the like in this state and moving the line marker to the left or right, for example, the difference between the three-dimensional images A and B (changes in shade or presence) can be clearly discriminated with the line marker as a boundary. Therefore, accurate interpretation can be performed.", and state in relevant part, with respect to alternating, that: "In the latter case, when OFF is pressed, the combined portion is returned to the original 3D image A. When ON is pressed again, the 3D image B is again combined with the designated ROI. To do. By moving the ROI to the left or right in the vicinity of the region of interest or repeatedly turning on / off the composition in the region of interest, the difference between the three-dimensional images A and B in the region of interest (changes in shade or presence) can be clearly identified. Therefore, accurate interpretation can be performed." See also [0174], as this can be the same or a different modality; while this was addressed above, it is also stated in the same embodiment (19) directly). Therefore it would have been obvious to one of ordinary skill in the art prior to the date of invention to improve the display process of Maschke with movement of the boundary and/or with alternation of the bounded overlaid image taught by Oishi in order to advantageously allow accurate image interpretation of the same area of the patient in multiple images.
Additionally, the examiner further omitted the term "autonomously" from the foregoing, as indicated by the ellipses, as neither Maschke nor Oishi would enforce the image generation entirely automatically since, e.g., Oishi has the user toggle the view. However, and as a first grounds of rejection for this feature, the examiner notes that the only difference between the instant claims and the prior art is that a computer-based activity is done automatically instead of manually. To that end, the examiner notes that MPEP 2144.04(III) makes clear that this sort of difference is prima facie obvious and does not cause a claim to distinguish over the prior art. Therefore it would have been prima facie obvious to one of ordinary skill in the art prior to the date of invention to automate the ROI movement taught by Oishi in order to reap the same advantages without requiring user intervention, at least in light of the legal precedent set forth in MPEP 2144.04(III).

Additionally or alternatively, and as noted in the conclusion section of the previous Office action, Kuhn teaches that automatically flipping between images is advantageous. See specifically the first paragraph of page 10 of the Kuhn reference, which iterates that automatically blinking between two views improves upon conventional comparison methods such as are present in Oishi, as the eye naturally is drawn to, and increased attention is paid to, this movement; therefore it is advantageous to automate the switching of images. Therefore, and in the alternative, it would have been obvious to one of ordinary skill in the art prior to the date of invention to take the switching of image ROIs taught by Oishi and automate the process as taught by Kuhn, as this advantageously allows the user to pay closer attention to/emphasizes the changes between the images.

Regarding claim 13, Maschke teaches:

13.
A system for displaying co-registered images (see Maschke's Abstract), said system comprising: an imaging catheter configured to obtain images according to two or more imaging modalities (see Maschke's Figs. 1-3, noting the catheters 1, 7, or 9 with multimodal sensors 4, 11, 3, 8, or 10, and/or see the Abstract, which covers as much and explains the reference characters depicted in said figures), and wherein said imaging catheter is configured such that respective imaging beams associated with a first imaging modality and a second imaging modality spatially overlap, thereby achieving spatial co-registration of a first image obtained by said first imaging modality and a second image obtained by said second imaging modality (see e.g. Maschke's Fig. 5, where 5A is an OCT image, 5B is an IVUS image, and 5C-D are a spatially co-registered composite image; additionally, see Maschke's [0024], which textually describes the exact feature the applicant appears to be driving at, which is that the spatial overlap can be a function of the image acquisition directly by gathering the same images at the same time, location, and in phase with each other; however, and for compact prosecution purposes, the examiner also notes that gathering these out of phase (e.g. with sensors operating at different rotational speeds) would still be the imaging catheter being formatted so as to have spatially overlapping acquisitions that achieve (through further processing) co-registration, such that the cited figures alone, which show the co-registered acquisitions, appear to teach the breadth of the claim language); and processing circuitry (see Maschke's [0013], noting that the first and second sensors connect to an image processing device that generates the image discussed below. This is also/alternatively inherent, as the GUI of Fig. 5 is electronic and it is impossible for the user to, e.g.,
draw the combined images 5C-D in real time; as such the involvement of the processor will not be hereafter addressed) configured to perform operations comprising: processing the first image and the second image to … generate a dynamic composite image in which throughout a prescribed image region (see Maschke's Figs. 5A-C), … such that the dynamic composite image enables identification of features in both the first image and the second image (see Maschke's Figs. 5A-C, where C itself is a fusion of A+B as per [0035]-[0037], where the display can encompass multiple images as per [0048], and where these images can be used to "enable" identification of features in each image modality either because this is inherent (i.e. if the data is displayed then the user could, at their prerogative, accomplish as much) or as per e.g. [0055]-[0056], which describes that the images contain distinct details so as to explicitly enable as much. As for being dynamic, see e.g. [0009]-[0010], which explains that these 360-degree/2D images are individual cross sections of the artery within a larger image sequence obtained while the device is transiting a pullback; see also e.g. [0027]-[0029] or [0044]-[0047] or [0051]); said processing circuitry being further configured such that: said prescribed image region of said dynamic composite image comprises one or more first subregions displaying image data from said first image (see Maschke's Figs. 5A and 5C) and one or more second subregions displaying image data from said second image (see Maschke's Figs. 5B and 5C), said one or more first subregions being absent of spatial overlap with said one or more second subregions (see Maschke's Figs. 5A-C); and wherein borders between adjacent subregions vary spatially with time (as iterated above, Maschke's images are a sequence obtained as the device is driven along the artery; see e.g. [0009]-[0010] or see above.
As such, whether or not the absolute position of the box which defines the border depicted in Maschke's Fig. 5C moves relative to the outer border of Fig. 5C or relative to its past locations, the first and second regions, and thus the border between them, are still going to vary spatially with time, since the data is gathered over time as the imaging ROI moves spatially within the body. While the foregoing addresses the claimed wording, the examiner does understand what the applicant is arguing for and thus, for compact prosecution purposes, the examiner notes that, at least in the alternative, Oishi also covers this in multiple ways as addressed below).

Maschke therefore teaches the majority of the claimed limitations and even further teaches that the user should be able to alter the presentation of the images for display (see Maschke's [0048], noting in particular "An I/O unit 25 is connected to the display unit 24 and can be used to enter information. In particular it can influence the presentation of the image or images shown on the display unit 24. The I/O unit 25 can be embodied as a keyboard or an operator console and is also connected to data bus 21"). However, the examiner omitted that the processing is such that the first and second image data "alternates with time" because Maschke does not state that the user should alternate the display with time. Likewise, while Maschke teaches that the "borders between adjacent subregions vary spatially with time" from a first perspective, this can also be taught, at least in the alternative, from a second perspective to compact prosecution. To that end Oishi, in the related field of multi-modality imaging and display processing (see Oishi's Abstract or [0001]-[0002]), teaches that one can repeatedly move or alter the dimensions of overlaid images, or alternate between two images in an area over time, and that this would be advantageous (see Oishi's Fig.
23 and [0171]-[0175], which show and iterate that a dynamic composite image is formed on the display with images from two modalities, and which state in relevant part, with respect to movement of the boundaries of the ROI, that "Further, by operating the mouse m or the like in this state and moving the line marker to the left or right, for example, the difference between the three-dimensional images A and B (changes in shade or presence) can be clearly discriminated with the line marker as a boundary. Therefore, accurate interpretation can be performed.", and state in relevant part, with respect to alternating, that: "In the latter case, when OFF is pressed, the combined portion is returned to the original 3D image A. When ON is pressed again, the 3D image B is again combined with the designated ROI. To do. By moving the ROI to the left or right in the vicinity of the region of interest or repeatedly turning on / off the composition in the region of interest, the difference between the three-dimensional images A and B in the region of interest (changes in shade or presence) can be clearly identified. Therefore, accurate interpretation can be performed." See also [0174], as this can be the same or a different modality; while this was addressed above, it is also stated in the same embodiment (19) directly). Therefore it would have been obvious to one of ordinary skill in the art prior to the date of invention to improve the display process of Maschke with movement of the boundary and/or with alternation of the bounded overlaid image taught by Oishi in order to advantageously allow accurate image interpretation of the same area of the patient in multiple images.

Additionally, the examiner further omitted the term "autonomously" from the foregoing, as indicated by the ellipses, as neither Maschke nor Oishi would enforce the image generation entirely automatically since, e.g., Oishi has the user toggle the view.
However, and as a first grounds of rejection for this feature, the examiner notes that the only difference between the instant claims and the prior art is that a computer-based activity is done automatically instead of manually. To that end, the examiner notes that MPEP 2144.04(III) makes clear that this sort of difference is prima facie obvious and does not cause a claim to distinguish over the prior art. Therefore it would have been prima facie obvious to one of ordinary skill in the art prior to the date of invention to automate the ROI movement taught by Oishi in order to reap the same advantages without requiring user intervention, at least in light of the legal precedent set forth in MPEP 2144.04(III). Additionally or alternatively, and as noted in the conclusion section of the previous Office action, Kuhn teaches that automatically flipping between images is advantageous. See specifically the first paragraph of page 10 of the Kuhn reference, which iterates that automatically blinking between two views improves upon conventional comparison methods such as are present in Oishi, as the eye naturally is drawn to, and increased attention is paid to, this movement; therefore it is advantageous to automate the switching of images. Therefore, and in the alternative, it would have been obvious to one of ordinary skill in the art prior to the date of invention to take the switching of image ROIs taught by Oishi and automate the process as taught by Kuhn, as this advantageously allows the user to pay closer attention to/emphasizes the changes between the images.

Regarding claims 8 and 17, Maschke IVO Oishi teaches the basic invention as given above in regards to claims 1 and 13, and Maschke further teaches that the processing circuitry performs the image processing steps as established above or as seen in Maschke's [0013]; however, Maschke does not perform tissue type identification and therefore fails to teach: "8.
The method according to claim 1 further comprising the steps of: processing one or more of said first image and said second image to identify one or more tissue types; generating an updated dynamic composite image comprising an indication of said one or more tissue types; and displaying said updated dynamic composite image." Or "17. The system according to claim 13 wherein said processing circuitry configured to perform operations further comprising: processing one or more of said first image and said second image to identify one or more tissue types; generating an updated dynamic composite image comprising an indication of said one or more tissue types; and displaying said updated dynamic composite image."

However, the examiner notes that performing tissue type identification, and the advantages thereof such as improving or aiding in diagnosis and/or identifying important tissues, are old and well known in the art. See MPEP 2144.03. Therefore it would have been prima facie obvious to one of ordinary skill in the art prior to the date of invention to improve the method of Maschke with the use of tissue type identification in order to advantageously aid in diagnosis and/or identify important tissue types.

Regarding claim 9, Maschke further teaches:

9. The method according to claim 2 wherein respective spatial regions associated with said one or more portions of said first image and said one or more portions of said second image vary with time (see Maschke's Fig. 5 and then note that Maschke's [0036]-[0037] iterates that these images are generated as a continuous image (e.g. video) such that all portions of the image, including the spatial regions associated with the OCT data and the spatial regions associated with the IVUS data, vary with time).

Regarding claims 10 and 18, Maschke further teaches:

10.
The method according to claim 1 wherein said imaging catheter is configured such that respective imaging beams associated with said first imaging modality and said second imaging modality are angled inwardly with respect to one another to facilitate the spatial overlap. 18. The system according to claim 13 wherein said imaging catheter is configured such that respective imaging beams associated with said first imaging modality and said second imaging modality are angled inwardly with respect to one another to facilitate the spatial overlap (see Maschke's Fig. 1 in light of [0024] and [0038]; that is, the IVUS sensor 3 and the OCT sensor 4 are both outward-viewing and non-colinear (e.g. the OCT sensor is proximal to the IVUS sensor) but image the same region at the same time and are in phase with one another. As such, in order to perform the registration set forth in [0024] or to form the image as set forth in Figs. 5C-D or described in [0045], the imaging beams must diverge so as to cover an overlapping FOV, and thus some beams from the IVUS sensor must angle proximally towards the OCT sensor and some beams from the OCT sensor must angle distally towards the IVUS sensor in order to obtain the overlapped image. Additionally, and in order to compact prosecution, as the examiner is not blind to the contents of the specification and to the fact that the applicant's Fig. 12 depicts the sensors/foci of the sensors themselves as angled inward relative to each other, the examiner has included art in the conclusion section that, in the same field of combined optical/US imaging, would teach this narrower limitation).

Regarding claims 12 and 20, Maschke further teaches:

12. The method according to claim 1 wherein said imaging catheter is configured such that respective imaging beams associated with said first imaging modality and said second imaging modality are spatially overlapping over a substantial portion of their respective focal ranges. 20.
The system according to claim 13 wherein said imaging catheter is configured such that respective imaging beams associated with said first imaging modality and said second imaging modality are spatially overlapping over a substantial portion of their respective focal ranges (see Maschke's Figs. 5A-D, noting that the overlap is depicted and is as close to total as is practicable with the shorter penetration of the OCT, which rather clearly reads on the mere "substantial portion" limitation; however, and to fully compact prosecution, see also [0024], which attributes this to the image sensors of the catheter being run so as to capture the images at the same time and in phase with each other).

Claims 5-6 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Maschke IVO Oishi together, alternatively further IVO Kuhn as applied to claim 1 above, and further alternatively in view of US 20070244393 A1 by Oshiki et al. (hereafter Oshiki, previously of record).

Regarding claims 5-6, Oishi further teaches:

5. The method according to claim 1 wherein said one or more first subregions portions of said first image and said one or more portions of said second image second subregions are determined according to input from a user. And 6. The method according to claim 5 wherein said user input comprises an identification of one or more of the borders between adjacent subregions (as iterated in the modification to include Oishi above, the same sections cited above teach this limitation, e.g.
from [0172]-[0173] one can note: "That is, the operator operates, for example, the mouse m of the input unit 3c to specify boundary line data (for example, a line marker that equally divides the display area into left and right at the center of the screen) … by operating the mouse m or the like in this state and moving the line marker to the left or right, for example, the difference between the three-dimensional images A and B (changes in shade or presence) can be clearly discriminated").

However, and in the alternative, the examiner notes that Oshiki, in the same or an eminently related field of spatially co-registered diagnostic imaging (see e.g. Oshiki's Abstract or Fig. 1 in light of [0087] for aligning and displaying diagnostic images in general; though the examiner notes that the foregoing covers the same field as Maschke, one can also see that Oshiki and Maschke are far more related and in fact address the same specific problems within this same field, such as vessel imaging using IVUS, where Oshiki depicts vessels in particular in many places such as Figs. 3-4, 7, 11, etc. and states that this can utilize IVUS images of the vessels, e.g. where [0087] utilizes US imaging and where Fig. 5 in light of [0148] iterates that this imaging probe is in the vessel, and does so in real time, see e.g. claim 6; so as to even more clearly relate the arts), teaches allowing the user to designate one or more portions/subregions of said first image and said one or more portions/subregions of said second image (see Oshiki's Figs. 2-3 and/or [0098] and [0101]) and then uses this in an automated process that identifies the same tissue boundaries over time, even if the tissue moves or rotates (see in general Oshiki's [0090]-[0105], [0119]-[0126] and Figs. 2-3: e.g. noting semi-automated extraction in Oshiki [0095]-[0096], automated extraction in Oshiki [0099]/[0125], and edges and borders being detected as depicted in Oshiki Figs.
2-3; then note that this is used in the context of vessel imaging as a whole and is not limited to a single vessel slice as depicted in Fig. 3, e.g. where a probe is being advanced in the vessel as depicted in Fig. 4 or 7 which show that the vessel does not stay in the same location and may bend/move with respect to the frame of reference, and as iterated in the description of Fig. 5 wherein probe 100 is advanced as in [0148], and even note that this can be a time series of images, e.g. see Fig. 11 and noting the time axis t and the determined vessel borders which specifically move and not only narrow but approach the central axis at different points in a vessel that possessed turns etc. such that the location of the border will rotate if the vessel rotates teaching the rotation from a first perspective or see [0133] which iterates that the size and direction (angle) of the ROI are alterable by the user allowing for a second form of rotation). Oshiki goes on to teach that this partially automated boundary identification is advantageous (see Oshiki’s [0002] which iterates that this can supplement visualization when only one modality will clearly show the object, or see [0164] which iterates that the perceptibility of the extracted regions can be improved, or see [0193] which iterates that this allows for improved diagnostic capabilities). Therefore it would have been obvious to one of ordinary skill in the art prior to the date of invention to improve the method of Maschke with the use of user input assisted region identification and tracking (i.e. 
displaying said image while varying locations of said sectors, wherein said sectors rotate over time and wherein said one or more portions of said first image and said one or more portions of said second image are determined according to input from a user), as taught by Oshiki in order to advantageously allow for better visualization of regions which show differentially in different modalities, to increase perceptibility of the extracted regions, and to improve diagnostic capabilities of the images. Regarding claim 6, Oshiki further teaches: 6. The method according to claim 5 wherein said user input comprises an identification of one or more of the borders between adjacent subregions (again see Fig. 3 and note the automatic contour selection iterated above in regards to the parent claim and incorporated herein wherein the user input 15-16 identifies the contours even if they are not a direct input of whole contours). However and for compact prosecution purposes the examiner notes the, at least because the input used to identify the contour in Oshiki and the input used to identify the contours in the applicant’s specification are different, the examiner notes that it would have alternatively been prima facie obvious to one of ordinary skill in the art to allow the user to input the contour shape because such methodology is old and well known in the art. See MPEP 2144.03 and note the reference previously provided below in the conclusion section. Therefore and in the alternative it would also have been prima facie obvious to one of ordinary skill in the art prior to the date of invention to modify the combination of Maschke and Oishi with the teachings of Oshiki to allow the user to input the contour directly because it is old and well known to do so. Claims 11 and 19 are rejected under pre-AIA 35 U.S.C. 
103(a) as being unpatentable over Maschke IVO Oishi together or alternative further IVO Kuhn as applied to claim 1 above, and further in view of WO 2007123518 A1 by Papaioannou et al. (hereafter Papaioannou) Regarding claims 11 and 19, Maschke IVO Oishi teaches the basic invention as given above in regards to claims 1 and 13; however Maschke’s invention has the IVUS/OCT sensors share a proximal/distal relationship and as such they are not colinear. However, Papaioannou in the same or eminently related field of multi-modality IVUS/OCT imaging catheters (see Papaioannou’s Abstract) teaches that one can arrange the IVUS and OCT emitters and detectors such that they use a common/colinear imaging axis (see Papaioannou’s Fig. 1 noting the optical fiber 150 for the OCT arrangement is coaxially located within transducer ring 140 so as to be spatially correlated (i.e. co-registered) and image the same target area at the same time, see also page 18, lines 13-25 which also describe this arrangement and relationship). Papaioannou goes on to explain why this sort of arrangement for the probe elements is advantageous over other arrangements (see again Papaioannou’s page 18 lines 13-25 and note that this same section also lays out the advantages of such a design including in relevant part: “The device 100, as depicted in Figure 1, may allow for more compact design (i.e. smaller outer diameter), which may be beneficial when deployed in a cavity or hollow space.”). Therefore it would have been obvious to one of ordinary skill in the art prior to the date of invention to improve the combination of Maschke and Oishi with the arrangement of elements taught by Papaioannou in order to advantageously allow for a more compact design of the sort known to be beneficial in the internal imaging arts. 
Allowable Subject Matter

Claims 3-4 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: As can best be seen in the applicant’s Figs. 20, and as most clearly iterated in claim 4, the applicant has a unique way of displaying multiple images that the examiner could not find in the prior art, nor could the examiner find any reason to hold this sort of arrangement to be obvious over what is known in the art. More specifically, catheters which obtain spatially overlapping images from two imaging modalities are fairly well known in the art and can be seen in any of US 20050101859 A1 (hereafter Maschke, see e.g. Fig. 1 as depicted), WO 2007123518 A1 (hereafter Papaioannou, see e.g. Fig. 1 as depicted), and US 20070078343 A1 (hereafter Kawashima, see e.g. Fig. 1 as depicted), among others, each of which shows a multi-modality catheter that gathers US and optical images in a spatially overlapping manner from the same catheter. Likewise, forming a composite image in general is old and well known and is covered in, e.g., Maschke’s Figs. 5C-D. Where the applicant’s teachings become novel is in their alternation of the combined images. Some forms of alternation are taught in the art. Maschke’s Fig. 5D is a good example and shows one way of going about this by forming a fused image where, in a border region that overlaps both images’ FOV, the images from the multiple modalities are blended and yet constantly update in intensity as they are video/live images. Likewise, others have accomplished similar feats, such as WO 2007030424 A2 (hereafter Kuhn, see e.g. page 10, first paragraph) and JP 3639030 B2 (hereafter Oishi, see e.g. Fig. 23 and [0171]-[0174]), which show a computer-automated or user-directed (respectively) alternation between two overlaid images in the same area of a display/for the same area of the body. However, each of these can be differentiated from the applicant’s claims, as claims 3-4 and 15 require that plural segments switch back and forth over time, with adjacent segments alternatingly displaying the different modalities. This is substantively different from any of the foregoing arts and was the singular feature that the examiner could not find, nor find any cause to hold to be obvious. From there the applicant also further differentiates their invention from the prior art via claim 5 which, by both dependency and by the further limitation of rotating the segments around a central point, further defines the applicant’s invention over the prior art. For such reasons the examiner finds that claims 3-4 and 15 are both novel and unobvious over the prior art and thus would be allowable if drafted as independent claims.

Response to Arguments

Applicant’s arguments, see pages 8-11, filed 09/20/2024, with respect to the rejection(s) of claim(s) 1-2, 5-14, and 16-20 under Maschke in view of Oishi, Oshiki, and Papaioannou have been fully considered and are persuasive in light of the amendment to automate the image formation. Therefore, the previous grounds of rejection have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the legal precedent set forth in MPEP 2144.04(III) and/or the art of Kuhn, previously cited in the conclusion section, to address this very feature. Applicant’s arguments filed 03/10/2025 have been fully considered but they are not persuasive, with each argument being responded to in the order presented as follows: On pages 8-9 the applicant begins by providing an introduction, support for the amendment, and a general rationale for the changes, which is acknowledged.
On pages 9-10 the applicant begins their arguments against the 103(a) rejection by opining that the newly claimed subject matter is not taught, and in particular attempts to draw a distinction between the images being dynamic based on being continuous/video images versus the borders being dynamic. In this instance the examiner will freely admit that the applicant is correct in the core argument provided, but notes that they have strayed from the wording of the claim limitations. For instance, the arguments regard Maschke’s border as temporally fixed and opine that changing the image over time does not change the border location on the display; however, the claim does not specifically address the border’s location on the display but instead recites that the borders “vary spatially with time” which, as addressed above in the case for Maschke alone, can be taught by the prior art. In this instance the examiner has modified the rejection to emphasize that the images are not just a video stream, but are acquired during a pullback and thus are also varying in space over time. Given as much, the letter of the claim can still be addressed using Maschke’s teachings, such that the examiner is not actually convinced by the argument provided. However, rejecting only the letter of the claim, especially given the applicant’s clear intent described in their argument and the fact that there is a difference between the applicant’s specification and the teachings of Maschke related to this feature, would not compact prosecution. As such, the examiner has issued an alternative rejection for this feature that will likely be much more palatable to the applicant, as it addresses what they are arguing even though the examiner would hold that the claim language itself is not so limited in the current drafting.

More specifically, and further regarding the statement made on page 11 that the 103(a) modifying references do not cure the deficiencies, the examiner notes that should any deficiency exist, then either Oishi or Oshiki would remedy this presumed deficiency. In more detail, when one looks to Oishi, the examiner previously addressed how Oishi could alternate the overlay on and off, as that read on the prior claim wording. However, in nearly the same paragraphs, Oishi also regarded changing the location of the border between the overlay image and the base image instead. As such, while the exact citation provided may (arguendo) not have taught what the applicant was arguing, it is clear from simply reading a few paragraphs further into Oishi that Oishi does indeed address this same feature the applicant is attempting to argue. As such, Oishi would remedy any presumed deficiency and, more importantly to the examiner, would address what the applicant is arguing for in addition to addressing the rote claim language. However, and interestingly, the examiner also notes that Oshiki, now modified to be an alternative rejection given the amendment to the independent claim and the foregoing teachings of Oishi, already covered letting the user set the ROI boundaries, as covered in the previous rejection of dependent claims 5-6. Given that these boundaries are settable/alterable, it seems these same teachings would also undo the applicant’s arguments for reasons of record that do not appear to have been addressed other than by a general argument that these arts had not been applied to the independent claim which, while true, would not render the independent claim allowable given that there would be a suitable modification on record whose merits are not debated. As such, and for the foregoing reasons, the examiner is not convinced to remove the 103(a) rejection of claims 1 and 13.
The applicant concludes on page 11 by arguing for the patentability of all remaining claims by dependency; however, for the foregoing reasons the examiner is not convinced that the independent claims are patentable and thus is not convinced to allow the dependent claims at the current juncture.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael S Kellogg, whose telephone number is (571) 270-7278. The examiner can normally be reached M-F 9am-1pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL S KELLOGG/
Examiner, Art Unit 3798

/PASCAL M BUI PHO/
Supervisory Patent Examiner, Art Unit 3798
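The sector-alternation display the examiner indicates as allowable (claims 3-4 and 15: adjacent segments alternatingly showing the two modalities and swapping over time, refined by rotation about a central point) can be sketched roughly as follows. This is a minimal illustration under stated assumptions only: the function name, sector count, swap period, and two-modality checkerboard layout are ours for the sketch, not the applicant's actual implementation.

```python
import math

def modality_for_pixel(theta, t, n_sectors=8, swap_period=1.0, omega=0.0):
    """Pick which modality (0 = ultrasound, 1 = optical) to display at
    polar angle `theta` (radians) and time `t` (seconds).

    Illustrative sketch: adjacent angular sectors show opposite
    modalities, the assignment flips every `swap_period` seconds, and
    the sector boundaries rotate about the image center at `omega`
    rad/s (all parameters are assumptions, not claim language).
    """
    # Rotate the sector boundaries over time about the central point.
    rotated = (theta - omega * t) % (2 * math.pi)
    sector = int(rotated / (2 * math.pi / n_sectors))
    # Checkerboard in angle: adjacent sectors get opposite modalities.
    base = sector % 2
    # Flip the whole assignment every swap_period seconds.
    phase = int(t / swap_period) % 2
    return base ^ phase
```

In a display loop this would be evaluated per pixel (or per sector) each frame, so that each sector visibly swaps between the ultrasound and optical images while the sector pattern slowly rotates.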

Prosecution Timeline

Oct 18, 2021
Application Filed
Sep 26, 2022
Non-Final Rejection — §103, §112
Jan 04, 2023
Response Filed
Apr 08, 2023
Final Rejection — §103, §112
Jul 27, 2023
Applicant Interview (Telephonic)
Jul 29, 2023
Examiner Interview Summary
Aug 30, 2023
Request for Continued Examination
Sep 06, 2023
Response after Non-Final Action
Jun 13, 2024
Non-Final Rejection — §103, §112
Sep 20, 2024
Response Filed
Dec 28, 2024
Final Rejection — §103, §112
Mar 10, 2025
Response after Non-Final Action
Apr 09, 2025
Request for Continued Examination
Apr 10, 2025
Response after Non-Final Action
Oct 17, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551133
GRAPHICAL USER INTERFACE FOR CATHETER POSITIONING AND INSERTION
2y 5m to grant Granted Feb 17, 2026
Patent 12543955
APPARATUS AND METHOD FOR PATIENT MONITORING BASED ON ULTRASOUND MODULATION
2y 5m to grant Granted Feb 10, 2026
Patent 12544042
APPARATUS AND METHOD FOR REAL-TIME TRACKING OF TISSUE STRUCTURES
2y 5m to grant Granted Feb 10, 2026
Patent 12514454
INSERT AND PHOTOACOUSTIC MEASUREMENT DEVICE COMPRISING INSERT
2y 5m to grant Granted Jan 06, 2026
Patent 12433489
DEVICES AND METHODS FOR IN VIVO TISSUE DIAGNOSIS
2y 5m to grant Granted Oct 07, 2025


Prosecution Projections

5-6
Expected OA Rounds
42%
Grant Probability
98%
With Interview (+55.8%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 268 resolved cases by this examiner. Grant probability derived from career allow rate.
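The figures above appear to relate by simple addition: the 42% base grant probability plus the 55.8-point interview lift gives roughly the 98% "With Interview" number. A minimal sketch of that assumed arithmetic (the function name is ours, and the additive-with-cap model is an assumption about how the tool combines the numbers, not documented behavior):

```python
def with_interview(base_pct, lift_pct):
    """Interview-adjusted grant probability, assuming the tool simply
    adds the examiner's interview lift to the base rate, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

# 42.0 base + 55.8 lift = 97.8, which rounds to the 98% shown.
print(round(with_interview(42.0, 55.8)))
```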
