DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 12, and all dependent claims thereof, recite the limitations “apply a stitching algorithm on the series of overlapping visible light images to generate a large visible light image of the body part, the stitching algorithm determining and applying a set of stitching parameters, apply the stitching algorithm on the series of overlapping fluorescence images to generate a large fluorescence image, wherein the stitching algorithm applies the set of stitching parameters determined when performing the stitching of the visible light images if the viewing direction and perspective of the visible light images and the fluorescence images are identical, and the stitching algorithm applies the set of stitching parameters having a fixed offset if the viewing direction and perspective of the visible light images and the fluorescence images are not identical… wherein the stitching algorithm comprises extracting features from the series of visible light images and the series of fluorescence images and performing image matching using the extracted features.” (Emphasis added.)
The claim requires determining the stitching parameters of the stitching algorithm using ONLY the visible light images, and then applying that set of parameters, with or without a fixed offset, to the fluorescence images. However, the newly added limitations require extracting features from both the fluorescence and visible light images and “performing image matching using the extracted features.” The portion of the disclosure cited by Applicant as support for these limitations (figure 7 and the description of figure 7 appearing at pg. 32 of the instant specification) describes performing feature extraction and matching on a series of acquired images (without specifying whether this means both the fluorescence and visible light images) and using the matched features to determine the set of stitching parameters (see steps 7-11). However, the claim states that the stitching parameters are determined for the visible light images only and that the same set of parameters is then applied to the fluorescence images with or without an offset. These appear to be two contradictory requirements – either the stitching parameters are determined for only the visible light images and then applied to both sets of images (as was previously claimed), or the algorithm of figure 7 is applied to BOTH sets of images, which would result in two sets of stitching parameters. It is not clear which process is required by the claims – determining a single set of stitching parameters and applying it to both sets of images, or determining stitching parameters (via feature extraction and image matching) for both sets of images independently.
For the purposes of further examination, this claim will be interpreted to include extracting features from the visible light images, matching the features in the visible light images, generating stitching parameters for the visible light images, and then applying the stitching parameters to both the visible light images and the fluorescence images. This interpretation is consistent with the original claims and with the disclosure (see the bottom of pg. 27 into pg. 28 of the instant specification).
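For illustration only, and not as part of the record, the interpretation above (extract and match features in the visible light images, derive one set of stitching parameters, then reuse those parameters, optionally with a fixed offset, for the fluorescence series) can be sketched under a translation-only model. All names, data, and the translation-only assumption below are hypothetical and are not drawn from the specification or the cited art:

```python
import numpy as np

def match_features(desc_a, desc_b):
    # Brute-force nearest-neighbour matching on feature descriptor vectors.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def estimate_stitching_params(pts_a, pts_b, matches):
    # Translation-only "stitching parameters": the median displacement of
    # matched feature points (the median tolerates a few bad matches).
    return np.median(pts_b[matches] - pts_a, axis=0)

def apply_params(tile_origin, params, offset=None):
    # Place a tile in the large-image frame; the SAME parameters derived
    # from the visible light images are reused for the fluorescence tile,
    # optionally shifted by a fixed offset for non-identical optics.
    shift = params if offset is None else params + offset
    return tile_origin + shift

# Hypothetical feature points in two overlapping visible light frames.
rng = np.random.default_rng(0)
pts_frame1 = rng.uniform(0, 100, size=(20, 2))
true_shift = np.array([12.0, -3.0])
pts_frame2 = pts_frame1 + true_shift
descriptors = rng.normal(size=(20, 8))  # identical in both frames here

matches = match_features(descriptors, descriptors)
params = estimate_stitching_params(pts_frame1, pts_frame2, matches)

visible_origin = apply_params(np.zeros(2), params)                       # identical optics
fluor_origin = apply_params(np.zeros(2), params, np.array([1.5, 0.5]))   # fixed offset
```

Note that, in this sketch, feature extraction and matching occur only on the visible light data; the fluorescence series contributes nothing to the parameter estimate, consistent with the interpretation adopted above.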
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 12-15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tsumatori (WO 2021/038729 A1, Mar. 04, 2021) (hereinafter “Tsumatori”) in view of Lurie, Kristen L., Roland Angst, and Audrey K. Ellerbee, "Automated mosaicking of feature-poor optical coherence tomography volumes with an integrated white light imaging system," IEEE Transactions on Biomedical Engineering 61.7 (2014): 2141-2153 (hereinafter “Lurie”).
Regarding claim 12, as best understood based on limitations which are indefinite: Tsumatori discloses an image capturing and processing device configured to measure a fluorescence signal in a tissue of a body part, to which a fluorescent agent has been added, and configured to image a surface of the body part, wherein the tissue to which the fluorescent agent has been added forms part of the body part, the image capturing and processing device comprising: an image capturing device (figs. 1-2) comprising: an illumination light source configured to illuminate the tissue with excitation light having a wavelength suitable to generate emitted light by excited emission of the fluorescent agent (figs. 1-2, light source 5; pg. 5, paragraphs 5-6; or fig. 17, light source 105b; pg. 15, final paragraph), two or more image sensors configured to capture a fluorescence image by spatially resolved measurement of the emitted light so as to provide a fluorescence image, and capture a visible light image of a section of a surface of the body part wherein the two or more image sensors are configured in that a viewing direction and/or a perspective of the fluorescence image and the visible light image are linked via a known relationship (figs. 2 and 5 or 17, visible light detection unit 11a and fluorescence light detection unit 11b; pg. 7, paragraphs 4-7), wherein the two or more image sensors are further configured to repeat capturing of the fluorescence image and the visible light image to provide a series of overlapping fluorescence images (pg. 17, paragraph 3) and a series of visible light images (pg.
8, paragraphs 2-3), the image capturing and processing device further comprising one or more processors comprising hardware, the one or more processors (image processing unit 23) being configured to: apply a stitching algorithm on the series of visible light images to generate a large visible light image of the body part, the stitching algorithm determining and applying a set of stitching parameters, apply the stitching algorithm on the series of fluorescence images to generate a large fluorescence image, wherein the stitching algorithm applies the set of stitching parameters determined when performing the stitching of the visible light images, if the viewing direction and perspective of the visible light images and the fluorescence images are identical, wherein the stitching algorithm comprises extracting features from the series of visible light images and the series of fluorescence images and performing image matching using the extracted features (extended image generation unit 53; fig. 18, pg. 16, paragraph 4 – “the feature points in the visible image 62 are extracted by image recognition, and the movement amount of the feature points in the visible image 62 when the imaging field of view 71 is moved by a predetermined distance is used”), and output the large visible light image and the large fluorescence image (pg. 12, final paragraph).
While Tsumatori describes using overlapping fluorescence images, Tsumatori does not mention that the visible light images would also overlap. However, it is considered to be prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have both overlapping fluorescence images and overlapping visible light images in light of Tsumatori’s repeated teachings that the images are acquired in the same manner at the same locations (see e.g. pg. 16, paragraph 4).
It is noted that the alternative limitation "... if the viewing direction and perspective of the visible light images and the fluorescence images are identical, and the stitching algorithm applies the set of stitching parameters having a fixed offset if the viewing direction and perspective of the visible light images and the fluorescence images are not identical, the offset reflecting the known relationship linking the viewing direction and perspective of the fluorescence image and the visible light image" is, in essence, claiming two separate hardware arrangements rather than claiming two possible software processing paths (since the processing performed by a device with an identical light path for acquiring both images would have no need for an alternative applying an offset). However, since the claim requires that the processor be configured to perform both functions, and in the interest of advancing prosecution, both alternatives will be considered.
Lurie, in the same problem solving area of generating composite mosaic (stitched) images from two modalities, discloses a system configured to generate and overlay two mosaic images – one from a visible light camera and one from an infrared-based OCT imager. Lurie discloses that the stitching parameters are generated from the white light (WL) images by extracting features from the white light (“visible light”) images, matching the features between at least two frames, and determining stitching parameters based on the matched features (see figure 2 and algorithm step 1, "Coregister pairs of volumes acquired sequentially or having a high degree of overlap," step 1a – creating a homography using matching feature points between two WL images), and are then applied to the OCT images, including applying a fixed offset reflecting the known relationship linking the viewing direction and perspective of the OCT image and the visible light image (A. Algorithm Overview, in particular step 1b – “…using the static relative transformation between the WL and OCT cameras…”).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the algorithm of Tsumatori by including the fixed offset of Lurie in order to obtain properly aligned images for the composite/overlay in a case when the viewing direction and perspective of the visible light images and the fluorescence images are not identical.
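As a sketch only, the reuse of WL-derived parameters via a "static relative transformation" (Lurie, step 1b) can be modeled as conjugating the frame-to-frame homography by the fixed transform between the two cameras. The function name and matrix values below are illustrative assumptions, not values taken from Lurie or Tsumatori:

```python
import numpy as np

def transfer_homography(h_wl, s_static):
    # Map a frame-to-frame homography estimated on the white light (WL)
    # images into the second modality's coordinates, where the second
    # camera is related to the WL camera by the static transform s_static
    # (x_second = s_static @ x_wl):  h_second = S @ H @ S^-1.
    return s_static @ h_wl @ np.linalg.inv(s_static)

# Illustrative values: WL frames related by a translation of (12, -3);
# the second camera sees the same scene at twice the WL scale.
h_wl = np.array([[1.0, 0.0, 12.0],
                 [0.0, 1.0, -3.0],
                 [0.0, 0.0,  1.0]])
s_static = np.diag([2.0, 2.0, 1.0])

h_second = transfer_homography(h_wl, s_static)
# In the second modality the same motion appears as a translation of (24, -6).
```

The point of the sketch is that only one registration (on the WL images) is computed; the second modality inherits it through a fixed, calibration-time transform, which is the "fixed offset" character of the combined teaching.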
Regarding claim 13: Tsumatori further discloses wherein the one or more processors are further configured to: superimpose the large visible light image and the large fluorescence image to provide an overlay image of the body part, and output the overlay image as output of the large visible light image and the large fluorescence image (pg. 12, final paragraph; fig. 12; pg. 10, final paragraph).
Regarding claim 14: Tsumatori further discloses wherein the two or more image sensors are configured in that the viewing direction and the perspective of the fluorescence image and the visible light image are identical (figs. 2 and 5, visible light detection unit 11a and fluorescence light detection unit 11b; pg. 7, paragraphs 4-7).
Regarding claim 15: Tsumatori further discloses the device according to claim 14, wherein the two or more image sensors are configured in that the fluorescence image and the visible light image are captured through a same objective lens (figs. 2 and 5, common optical system 12; pg. 7, paragraphs 4-7).
Regarding claim 18: Tsumatori further discloses wherein the illumination unit and the two or more image sensors are arranged in a single image capturing device, which further comprises a measurement sensor configured to measure a distance between the surface of the body part and the two or more image sensors, which is captured in the visible light image (pg. 9, paragraph 3 - "distance meter").
Regarding claim 19: Tsumatori further discloses wherein the image capturing device is further configured to output a distance signal, which is indicative of the measured distance (pg. 9, paragraph 3 - since the distance measurement is used by the control unit 22 for determining position, field of view, and zoom magnification, the "distance signal" must be output at least from the measurement sensor to the control unit 22).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tsumatori and Lurie, as applied to claim 12 above, and further in view of Meester (US 2021/0137369 A1, May 13, 2021) (hereinafter “Meester”).
Regarding claim 16: Tsumatori and Lurie teach the device according to claim 12, wherein the two or more image sensors are configured to capture the fluorescence image and the visible light image at the same location and having the same field of view (Tsumatori - pg. 7, paragraph 4; pg. 12, paragraph 5), but are silent on the timing of the image acquisition, including wherein the two or more image sensors are configured to capture the fluorescence image and the visible light image simultaneously, in absence of time-switching between a signal of the fluorescence image and a signal of the visible light image.
Meester, in the same problem solving area of multi-wavelength imaging, teaches an image capture device comprising multiple image sensors which are configured to capture images having overlapping wavelength ranges simultaneously, in absence of time-switching between a signal of the fluorescence image and a signal of the visible light image ([0011]).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Tsumatori and Lurie so that the fluorescence image sensor and visible light image sensor may acquire images simultaneously in absence of time-switching in view of the teachings of Meester, in order to avoid the need for registration when there is motion of the device.
Regarding claim 17: Tsumatori and Lurie teach the device according to claim 12, including a prism used to split the fluorescence light and the visible light (prism 12b; pg. 7, paragraphs 6-7), but are silent on the details of the prism and do not disclose a first prism subassembly comprising a first prism, a second prism, a first compensator prism located between the first prism and the second prism, a second dichroic prism subassembly for splitting the visible light in three light components, and a second compensator prism located between the second prism and the second prism subassembly, wherein the first prism and the second prism each have a cross section with at least five corners, each corner having an inside angle of at least 90 degrees, wherein the corners of the first prism and the second prism each have a respective entrance face and a respective exit face, and are each configured so that an incoming beam which enters the entrance face of the respective prism in a direction parallel to a normal of said entrance face is reflected twice inside the respective prism and exits the respective first prism and second prism through its exit face parallel to a normal of said exit face, wherein the normal of the entrance face and the normal of the exit face of the respective first prism and second prism are perpendicular to each other; wherein, when light enters the first prism through the entrance face, the light is partially reflected towards the exit face of the first prism thereby traveling a first path length from the entrance face of the first prism to the exit face of the first prism, and the light partially enters the second prism via the first compensator prism and is partially reflected towards the exit face of the second prism, thereby traveling a second path length from the entrance face of the first prism to the exit face of the second prism, and wherein the first prism is larger than the second prism so that the first and the second path lengths are the same.
Meester, in the same problem solving area of multi-wavelength imaging, teaches an image capturing device comprising a dichroic prism assembly configured to receive fluorescent light forming the fluorescence image and visible light forming the visible light image through an entrance face, the dichroic prism assembly comprising: a first prism subassembly comprising a first prism, a second prism, a first compensator prism located between the first prism and the second prism, a second dichroic prism subassembly for splitting the visible light in three light components, and a second compensator prism located between the second prism and the second prism subassembly, wherein the first prism and the second prism each have a cross section with at least five corners, each corner having an inside angle of at least 90 degrees, wherein the corners of the first prism and the second prism each have a respective entrance face and a respective exit face, and are each configured so that an incoming beam which enters the entrance face of the respective prism in a direction parallel to a normal of said entrance face is reflected twice inside the respective prism and exits the respective first prism and second prism through its exit face parallel to a normal of said exit face, wherein the normal of the entrance face and the normal of the exit face of the respective first prism and second prism are perpendicular to each other; wherein, when light enters the first prism through the entrance face, the light is partially reflected towards the exit face of the first prism thereby traveling a first path length from the entrance face of the first prism to the exit face of the first prism, and the light partially enters the second prism via the first compensator prism and is partially reflected towards the exit face of the second prism, thereby traveling a second path length from the entrance face of the first prism to the exit face of the second prism, and wherein the first prism is larger than the second prism so that the first and the second path lengths are the same (fig. 5 and all associated description). Meester further teaches that this arrangement is advantageous because the optical assembly can have the sensors mostly on one side while still using an even number of direction changes, so that all sensors see the same image and no mirror effects need to be compensated for ([0084]), and because the prism modules can be easily aligned, which makes automatic assembly easy; these modules can be prepared individually and afterwards bonded together with a simple robotic or manual tool ([0090]).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to implement the system of Tsumatori and Lurie with a dichroic prism assembly as taught by Meester in order to achieve the advantages of no mirror effect compensation and easy automatic assembly in view of the further teachings of Meester.
Response to Arguments
Applicant’s arguments regarding prior art rejections of claims 12-19, filed 02/05/2026, have been fully considered but are moot in view of the updated grounds of rejection necessitated by amendment. However, in the interest of advancing prosecution, certain of Applicant’s arguments will be addressed.
As a preliminary matter, Examiner notes that the claims as written are very unclear. Applicant presents a claim to a device (including an image capturing device), but then requires two separate processing flows for two separate image capturing device structures. In addition, Applicant requires that the two separate processing flows include contradictory steps. The claims require determining a set of stitching parameters using only the visible light images and then applying that same set of stitching parameters to the fluorescence images, with or without an offset (depending on the structure of the image capturing device), but also require extracting features from, and performing feature matching on, both the visible light and fluorescence images. It is unclear whether this feature extraction and matching is in addition to, or part of, determining the stitching parameters, although the specification presents the feature extraction and matching of one set of images as part of the stitching parameter determination. It is difficult to discern which steps are required to practice the claimed invention.
With respect to Applicant’s arguments against Tsumatori, Examiner notes that Tsumatori presents more than one embodiment which includes at least one embodiment using overlapping images and determining the mosaicking (“stitching”) parameters using extracted feature matching.
With respect to Applicant’s arguments against Lurie, Examiner respectfully disagrees with Applicant’s assertion that Lurie “only relates to the process of trying to match White Light (WL) images with corresponding optical coherence tomography (OCT) images.” Lurie presents a detailed algorithm (in both text and a diagrammatic flow chart) including the steps of registering images “having a high degree of overlap” by extracting features from the WL (visible) images, matching those features to create a homography, generating a transformation (“stitching parameters”), relating the WL and OCT images via the static transformation between the cameras, and applying the parameters to create a mosaicked image. See section II, including figure 2 and the algorithm presented in the second column of pg. 2 into pg. 3.
Moreover, Lurie discloses that the conventional approach to mosaicking (“stitching”) images is “1) Detect interest points—locations with rich local intensity gradient variation—in each volume. 2) Extract a descriptor for each interest point that describes the local gradients. 3) Coregister pairs of volumes based on correspondences (i.e., one-to-one relationships between image points of different images) generated between locations having similar descriptors. 4) Bundle adjust the positions of the volumes with respect to the observed correspondences, effectively distributing the cumulative error from pairwise (PW)-registered volumes across multiple volumes collected in a loop” (final paragraph of pg. 1). That is, even if the mosaicking algorithm of Lurie were somehow different from the claimed “stitching” algorithm, Lurie still discloses the same general approach as being purely conventional.
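The fourth step of the conventional pipeline quoted above (distributing cumulative pairwise error across volumes collected in a loop) admits a very simple translation-only sketch. This is an illustrative simplification, with hypothetical data, and not Lurie's actual bundle adjustment:

```python
import numpy as np

def distribute_loop_error(pairwise_shifts):
    # For frames collected in a closed loop, the pairwise shifts should sum
    # to zero; any residual (the loop-closure error) is spread evenly across
    # the pairwise registrations, a crude stand-in for bundle adjustment.
    shifts = np.asarray(pairwise_shifts, dtype=float)
    closure_error = shifts.sum(axis=0)
    return shifts - closure_error / len(shifts)

# Four pairwise registrations around a loop with a small accumulated error.
raw = [[10.2, 0.1], [0.3, 9.9], [-10.0, 0.2], [-0.1, -9.8]]
adjusted = distribute_loop_error(raw)
# The adjusted shifts sum to (0, 0), closing the loop exactly.
```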
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLYN A PEHLKE whose telephone number is (571)270-3484. The examiner can normally be reached 9:00am - 5:00pm (Central Time), Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLYN A PEHLKE/ Primary Examiner, Art Unit 3799