DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4 and 6-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hammer et al. (“An Active Retinal Tracker for Clinical Optical Coherence Tomography Systems,” 2005) in view of Xiao et al. (“Retinal Image Registration and Comparison for Clinical Decision Support,” 2012).
Regarding claim 1, Hammer teaches an image processing method for image processing performed by a processor (as shown in the abstract: an optical coherence tomography (OCT) system is disclosed for fundus imaging, and it is apparent that such an OCT system includes a processor; see also the abstract, which teaches "Analysis software was developed to co-align and co-add multiple fundus and OCT images and to extract quantitative information on location of structures in the images"),
the image processing comprising:
acquiring a first fundus image of an examined eye (page 5 section 4.1: “For each OCT scan collected, a fundus image is also saved”);
acquiring a position for acquiring a tomographic image of the examined eye's fundus, which is set using the first fundus image (see page 5 section 4.1: “The software algorithm works by registration of all images in a set to the position of the OCT beam”, and page 5 section 4.1 second para. “Once the approximate center is located, a region demarcated by an annulus is examined”, after which “the OCT beam and its center can then be located precisely”);
acquiring a second fundus image of the examined eye (page 5 section 4.1: “For each OCT scan collected, a fundus image is also saved”, and abstract: “software was developed to co-align and co-add multiple fundus and OCT images and to extract quantitative information on location of structures in the images”).
Hammer further teaches positional alignment of the first and second fundus images (page 5 section 4.1: "a software program was devised and developed to co-align sets of fundus images," and the software algorithm "works by registration of all images in a set to the position of the OCT beam"), and computing movement of the examined eye, because Hammer discloses a retinal tracker that "detects transverse eye motion via changes in feature reflectance" (abstract), and further explains that "if the eye moves with respect to the OCT scan, fundus features will appear blurry in the co-added image. If the eye is made stationary with respect to the OCT scan, fundus features will appear crisp." (page 5 section 4.1).
Hammer fails to teach: determining whether or not the acquired position is included in a specific range of the first fundus image; and computing a first movement amount of the examined eye in a case in which the acquired position is included in the specific range, using first registration processing to positionally align the first fundus image and the second fundus image, and computing a second movement amount of the examined eye in a case in which the acquired position falls outside the specific range, using second registration processing to positionally align the first fundus image and the second fundus image, the second registration processing being different from the first registration processing.
Xiao, in the same field of endeavor of retinal fundus image registration, teaches determining whether or not the acquired position is included in a specific range of the first fundus image, and using different registration processing depending on the image region and image quality. Xiao states (abstract, Method section) that "two image registration solutions were proposed for facing different image qualities of retinal images to make the registration methods more robust and feasible in a clinical application system." Xiao further teaches that one image may show a "lack of blood vessel information in its peripheral region," and that "solution 2 adopts a region-based registration method while blood vessel features are not fully available in visual fields for some fundus camera" (page 508 left column). Xiao also teaches the claimed different first and second registration processing ("solution 1 proposes a novel blood vessel enhancement method for blood vessel based retinal image registration," whereas solution 2 is optic-disk-region intensity-based registration, and "the difference of the registration process from that in solution 1 is that the mutual information computed from the two optic-disk-region candidates is used as a metric for both translation and affine registrations"; see page 509 left column).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Hammer to determine whether the acquired tomographic-image position is within a specific range of the first fundus image, and to use a first registration processing for positions included in that range and a different second registration processing for positions outside that range, as taught by Xiao, because Xiao expressly teaches using two different retinal image registration solutions for different image qualities and regions in order to make the registration methods more robust and feasible in a clinical application system. Such a modification would have predictably improved the robustness and accuracy of Hammer's fundus-image alignment and eye-movement computation under differing retinal image conditions.
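For illustration only (a hypothetical sketch, not drawn from Hammer or Xiao; the function names and the 25% central-margin threshold are assumptions), the claimed conditional selection between two registration processes could be implemented along these lines:

```python
# Hypothetical sketch of the claimed conditional registration logic:
# which registration processing is applied depends on whether the
# tomographic scan position falls inside a specific (central) range
# of the first fundus image.

def in_specific_range(pos, image_size, margin=0.25):
    """True if pos (x, y) lies in the central region of an image of
    image_size (width, height), excluding a fractional border."""
    x, y = pos
    w, h = image_size
    return (margin * w <= x <= (1 - margin) * w
            and margin * h <= y <= (1 - margin) * h)

def register_by_vessel_features(first_img, second_img):
    # Stand-in for feature-based registration (cf. Xiao's solution 1).
    return "first registration processing"

def register_by_region_intensity(first_img, second_img):
    # Stand-in for region/intensity-based registration (cf. Xiao's solution 2).
    return "second registration processing"

def compute_movement(first_img, second_img, scan_pos, image_size):
    """Dispatch to one of two registration routines based on scan position."""
    if in_specific_range(scan_pos, image_size):
        return register_by_vessel_features(first_img, second_img)
    return register_by_region_intensity(first_img, second_img)

print(compute_movement(None, None, (256, 256), (512, 512)))  # central position
print(compute_movement(None, None, (30, 256), (512, 512)))   # peripheral position
```

The registration routines are stubs; the point of the sketch is only the claimed determination step and the two distinct processing branches it selects between.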
Regarding claim 2, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Hammer further teaches controlling a scanning device for acquiring the tomographic image, based on the first movement amount or on the second movement amount (Hammer discloses that the retinal tracker is used to control the OCT scanning based on detected eye motion: it "detects transverse eye motion via changes in feature reflectance and positions the OCT diagnostic beam to fixed coordinates on the retina"; see abstract).
Regarding claim 3, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Xiao further teaches wherein the specific range is a fundus center region (Xiao distinguishes between central and peripheral regions of the fundus, noting a "lack of blood vessel information in its peripheral region"; this teaching inherently distinguishes the peripheral region from the central (fundus center) region, where vessel information is more available and reliable. Xiao further teaches using different processing depending on such regions and image characteristics: "two image registration solutions were proposed for facing different image qualities of retinal images").
Regarding claim 4, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Xiao further teaches extracting a feature point of respective retinal vessels from the first fundus image and the second fundus image, wherein the first registration processing positionally aligns the first fundus image and the second fundus image using the feature points of the retinal vessels ("solution 1 proposes a novel blood vessel enhancement method for blood vessel based retinal image registration," page 508 left column, and "blood vessel feature extraction and matching are used to align retinal images").
Regarding claim 6, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Hammer further teaches that eye movement is determined in terms of displacement of features ("detects transverse eye motion via changes in feature reflectance"; see abstract). However, Hammer fails to teach wherein: the first movement amount is configured by components comprising a first shift amount and a first shift direction; and the second movement amount is configured by components comprising a second shift amount and a second shift direction. Xiao teaches computing image alignment using transformation parameters that include displacement components. Xiao discloses that registration involves determining geometric transformations, including translation between images, which inherently includes both a magnitude and a direction of shift. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to express the movement amounts computed in the combination as shift amounts and shift directions, because the translation parameters determined by Xiao's registration inherently comprise both components.
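For illustration only (a hypothetical sketch, not drawn from the references), a translation (dx, dy) computed by registration decomposes directly into the claimed shift amount and shift direction:

```python
# Hypothetical sketch: a translation vector (dx, dy) inherently
# comprises a shift amount (its magnitude) and a shift direction
# (its angle), as argued for claim 6.
import math

def shift_components(dx, dy):
    """Decompose a translation into (shift amount, shift direction in degrees)."""
    amount = math.hypot(dx, dy)                   # shift amount (magnitude)
    direction = math.degrees(math.atan2(dy, dx))  # shift direction (angle)
    return amount, direction

print(shift_components(3.0, 4.0))  # a 3-4-5 translation: amount 5.0
```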
Regarding claim 7, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Hammer further teaches wherein: the first fundus image includes an image of the examined eye's fundus itself ("fundus images … are used … to co-align and co-add multiple fundus and OCT images"; see abstract); and the acquiring the second fundus image of the examined eye includes using the first fundus image to acquire a position of a region that is configured so as to contain at least a part of an image of the examined eye's fundus itself, and acquiring the second fundus image by acquiring an image of the examined eye's fundus based on the position of the region acquired (page 5: "The software algorithm works by registration of all images in a set to the position of the OCT beam" and "once the approximate center is located…the OCT beam and its center can then be located precisely"). Hammer does not explicitly disclose that the second fundus image is acquired by explicitly defining a region based on the first fundus image and then acquiring the second fundus image based on that region. Xiao teaches selecting regions of interest in fundus images for registration and processing, and performing image acquisition/processing based on such selected regions, including region-based registration approaches ("solution 2 adopts a region-based registration method…"; see page 508). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Hammer to explicitly acquire the second fundus image based on a region determined from the first fundus image, as taught by Xiao, because region-based processing is commonly used to improve registration accuracy and computational efficiency by focusing on relevant portions of the image, thereby improving the robustness and performance of image alignment.
Regarding claim 8, the combination of Hammer and Xiao teaches the image processing method of claim 7, but fails to teach wherein the size of the region is smaller than the size of the first fundus image. Xiao teaches selecting and processing a region of interest that is smaller than the full fundus image ("solution 2 adopts a region-based registration method…"; see page 508). Xiao further teaches that registration is performed using selected regions (e.g., optic-disk-region candidates; see page 509), rather than the entire image, thereby inherently using a region smaller than the full fundus image. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure the region in Hammer to be smaller than the size of the first fundus image, as taught by Xiao, because region-of-interest processing is a well-known technique used to reduce computational complexity and improve registration robustness by focusing on relevant portions of the image.
Regarding claim 9, the combination of Hammer and Xiao teaches the image processing method of claim 7, and Hammer further teaches that the OCT scan position is defined on the fundus image ("positions the OCT diagnostic beam to fixed coordinates on the retina"; see abstract), and that images are aligned relative to this scan position ("the software algorithm works by registration of all images in a set to the position of the OCT beam"; see page 5). However, Hammer does not explicitly disclose wherein the region includes at least a part of the position for acquiring the tomographic image. Xiao teaches selecting a region of interest for registration that corresponds to a meaningful anatomical or functional area ("solution 2 adopts a region-based registration") and selecting specific regions (e.g., optic-disk-region candidates) for registration, which inherently correspond to target positions within the fundus. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure the region in Hammer to include at least a part of the position for acquiring the tomographic image, as taught by Xiao, because selecting a region of interest that includes the scan position ensures accurate alignment and tracking of the area being imaged, thereby improving the precision and reliability of the tomographic imaging process.
Regarding claim 10, the combination of Hammer and Xiao teaches the image processing method of claim 2, and Hammer further teaches using the controlled scanning device to acquire a tomographic image of the examined eye's fundus ("an active … retinal tracker was built into a clinical OCT system for stabilization of high-resolution retinal sections"; see abstract, and "for each OCT scan collected …"; page 5).
Regarding claim 11, the combination of Hammer and Xiao teaches the image processing method of claim 2, and Hammer further teaches acquiring a tomographic image of the examined eye's fundus while repeatedly executing the acquiring the second fundus image, the determining, the computing, and the controlling ("software was developed to co-align and co-add multiple fundus and OCT images" and "for each OCT scan collected, a fundus image is also saved"; see page 5, and "detects transverse eye motion … and positions the OCT diagnostic beam…"; see abstract).
Regarding claims 12 and 13, these claims recite an image processing device comprising a processor configured to execute the same image processing steps recited in claim 1. Hammer, as discussed with respect to claim 1, teaches the recited image processing steps, and further teaches that "analysis software was developed to co-align and co-add multiple fundus and OCT images and to extract quantitative information…". Since Hammer in view of Xiao teaches all of the image processing steps as set forth in claim 1, and Hammer further teaches a processor executing such image processing, claims 12 and 13 are unpatentable over Hammer in view of Xiao for the same reasons set forth with respect to claim 1.
Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hammer and Xiao as applied to claim 1 above, and further in view of Hirokawa (US 2021/0004939).
Regarding claim 5, the combination of Hammer and Xiao teaches the image processing method of claim 1, and Xiao further teaches extracting feature points of retinal vessels and using them for registration ("solution 1 proposes a novel blood vessel enhancement method for blood vessel based retinal image registration"). However, the combination of Hammer and Xiao fails to teach extracting feature points of choroidal vessels and using such features for registration, i.e., extracting a feature point of respective choroidal vessels from the first fundus image and the second fundus image, wherein the second registration processing positionally aligns the first fundus image and the second fundus image using the feature points of the choroidal vessels.
In the same field of endeavor, Hirokawa teaches processing fundus images to extract and enhance choroidal vessel structures (see para. 0048), and expressly teaches enhancement of the choroidal vessels, stating that "the image processing unit 182 enhanced the choroidal blood vessels in the first fundus image… by performing CLAHE processing, as a result … a choroidal blood vessel image in which the choroidal blood vessels appear enhanced is obtained" (see para. 0049). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Hammer in view of Xiao, in further view of Hirokawa, to extract feature points of choroidal vessels and use those feature points in the second registration processing to align fundus images, because Hirokawa teaches that choroidal vessels can be enhanced and identified in fundus images, and Xiao teaches using vessel-based feature points for registration, thereby suggesting the use of alternative vessel structures (including choroidal vessels) as feature points to improve the robustness of registration in regions where retinal vessel features are insufficient.
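For illustration only (a simplified, hypothetical sketch, not taken from Hirokawa: full CLAHE additionally tiles the image and bilinearly interpolates between per-tile mappings), the contrast-limited histogram equalization at the core of the CLAHE processing Hirokawa cites can be sketched as:

```python
# Hypothetical simplified sketch of contrast-limited histogram
# equalization, the per-tile core of CLAHE, used to make low-contrast
# structures such as choroidal vessels more visible. The clip limit
# caps each histogram bin, limiting contrast amplification in flat areas.

def clipped_hist_equalize(pixels, clip_limit=4, levels=256):
    """Equalize a flat list of 8-bit pixel values with a clipped histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin and redistribute the excess uniformly across all bins.
    excess = sum(max(0, c - clip_limit) for c in hist)
    hist = [min(c, clip_limit) + excess // levels for c in hist]
    # Map each input level through the scaled cumulative distribution.
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]

patch = [118, 120, 120, 122, 122, 124]  # a low-contrast image patch
print(clipped_hist_equalize(patch))     # values spread across the 0..255 range
```

The sketch shows only why the enhancement helps registration: it stretches a narrow band of intensities over the full dynamic range, so faint vessel features become separable as feature points.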
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EPHREM ZERU MEBRAHTU whose telephone number is (571) 272-8386. The examiner can normally be reached 10 am - 6 pm (M-F).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Pham can be reached at 571-272-3689. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EPHREM Z MEBRAHTU/ Primary Examiner, Art Unit 2872