Prosecution Insights
Last updated: April 19, 2026
Application No. 17/769,288

IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND PROGRAM

Status: Final Rejection (§103)
Filed: Apr 14, 2022
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 (Communications)
Assignee: Nikon Corporation
OA Round: 4 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 5-6
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases; 376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: strong, +23.0% higher allowance among resolved cases with an interview
Typical Timeline: 2y 9m avg prosecution; 35 currently pending
Career History: 635 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates, based on career data from 600 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/3/2025 have been fully considered but they are not persuasive. The arguments state that the applied references do not perform the features of "generating a second fundus image by performing background processing to replace a first pixel value of a first pixel configuring the background area with a second pixel value different from the first pixel value, the second pixel value corresponding to a second pixel of the foreground area closest in distance to the first pixel or an average value of a plurality of pixels of the foreground area". The Examiner respectfully disagrees with this assertion, for the reasons explained below.

Regarding the Talwar reference, figure 2A shows the conversion of a fundus and background area into the image of figure 2B; the same conversion is shown in figures 3-5. These conversions involve converting vessels into a certain pixel value and converting an outer ring at the edge of the fundus, in the background area, from a darker color to a lighter color present in the foreground area. The white color represents a line filtered image containing an edge that surrounds the fundus image, where the pixel color of the background area is replaced. This color can be considered a pixel closest in distance to the background area edge, used as the line edge surrounding the fundus, since the converted vessel pixels approach the background area. The line filtered image displaying the edge line surrounding the fundus is explained in ¶ [27], [38] and [41]-[52]. The later images in figures 2-5 represent the second fundus image generated. Therefore, based on the above, the rejection of the claims is maintained, and the mapping of the claimed features is set out below.
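The disputed limitation quoted above describes a concrete, implementable operation. Purely as an illustration of what that claim language covers (this is not code from the application or from Talwar; the function name and the NumPy/SciPy choices are this note's assumptions), a minimal Python sketch:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def background_processing(image, foreground_mask, mode="nearest"):
    """Replace each background pixel value with either the value of the
    foreground pixel closest in distance ("nearest") or the average of
    the foreground pixel values ("average").  Illustrative sketch only."""
    out = image.astype(float).copy()
    background = ~foreground_mask
    if mode == "nearest":
        # distance_transform_edt treats non-zero pixels as "on" and, with
        # return_indices=True, returns for every pixel the index of the
        # nearest zero pixel.  Passing the background mask makes the
        # foreground pixels the zeros, so each background pixel maps to
        # its nearest foreground pixel.
        _, (rows, cols) = distance_transform_edt(background, return_indices=True)
        out[background] = image[rows[background], cols[background]]
    else:  # "average"
        out[background] = image[foreground_mask].mean()
    return out
```

Both branches of the claim's alternative ("closest in distance ... or an average value") reduce to a few array operations once a foreground mask is available.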
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-6, 8-11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Talwar (US Pub 2016/0163041) in view of Fujita (JP Pub 2007-097634 (Pub Date: 4/19/2007)).

Re claim 1: Talwar discloses an image processing method comprising: a processor acquiring a first fundus image of the examined eye including a foreground area, which is an area of a fundus that includes blood vessels, and a background area other than the foreground area, which is an area other than the fundus (e.g. an image of a fundus is acquired, which is taught in ¶ [18], [29] and [30]. The image is separated into foreground and background, which is taught in ¶ [53]. A processor is taught in ¶ [62]. Figures 5 and 7 show the vessels within the foreground area and the background area outside the area where the vessels are present.); and

[0018] A technique for blood vessel extraction in fundus color images of the eye using an alpha-matting technique is described.
In one embodiment, the alpha-matting technique is a K-nearest neighbors (KNN) based alpha-matting and is used to separate vessel and non-vessel regions in an image. In one embodiment, larger blood vessels are used to generate the tri-map needed for matting. Therefore, in one embodiment, no input is required from the user to generate the tri-map. A multi-dimensional feature set is constructed and fed into the matting framework. The affinities among the pixels are used to obtain the matting Laplacian, whose eigen decomposition leads to segregation of the vessel and non-vessel regions in the image having retinal vessels.

[0029] FIG. 1 is a flow diagram of one embodiment of a process for creating a multi-dimensional feature space. The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), firmware, or a combination of the three.

[0030] Referring to FIG. 1, the process begins by performing preprocessing on an original image 100 (processing block 110). Color fundus images invariably show important intensity variations, poor contrast and noise. In one embodiment, the preprocessing includes filtering and other operations performed on the input image. For example, in one embodiment, the preprocessing includes noise filtering to remove granular noise. In one embodiment, the noise filtering is accomplished by using Wiener filtering. In one embodiment, a Wiener filter is applied over a 3×3 window to remove granular and speckle noise from the image.

KNN-Matting

[0053] Alpha matting refers to the process of separating an image into foreground and background components and obtaining a corresponding opacity mask. The opacity mask is commonly called the alpha matte and its value is obtained for each individual pixel in the image. Typically, for a grayscale image,

I(i,j) = αF(i,j) + (1−α)B(i,j)  (6)

where I(i,j) is the image intensity at pixel (i,j), F is the foreground and B is the background image. At each pixel, the value of three unknowns {α, F, B} is calculated from one equation. Consequently, the alpha matting problem is highly under constrained. A user could provide a tri-map or scribbles which can be used to identify known foregrounds and backgrounds. However, additional constraints are still needed to solve the problem. KNN matting uses the multi-dimensional feature space and works on a non-local principle for finding neighbors. It is advantageous because it does not need large kernels to search for similar pixels in the neighborhood and provides a closed form solution.

[0062] Referring to FIG. 9, vessel extraction system 910 includes a bus 912 to interconnect subsystems of vessel extraction system 910, such as a processor 914, a system memory 917 (e.g., RAM, ROM, etc.), an input/output controller 918, an external device, such as a display screen 924 via display adapter 926, serial ports 928 and 930, a keyboard 932 (interfaced with a keyboard controller 933), a storage interface 934, a floppy disk drive 937 operative to receive a floppy disk 938, a host bus adapter (HBA) interface card 935A operative to connect with a Fibre Channel network 990, a host bus adapter (HBA) interface card 935B operative to connect to a SCSI bus 939, and an optical disk drive 940. Also included are a mouse 946 (or other point-and-click device, coupled to bus 912 via serial port 928), a modem 947 (coupled to bus 912 via serial port 930), and a network interface 948 (coupled directly to bus 912).
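Equation (6) quoted from Talwar can be checked with a toy numeric example. The sketch below is an illustration only: `composite` and `solve_alpha` are hypothetical helper names, and this is not Talwar's KNN matting solver, which estimates α from feature-space affinities precisely because F and B are normally unknown.

```python
def composite(alpha, F, B):
    # Equation (6): I(i,j) = alpha * F(i,j) + (1 - alpha) * B(i,j)
    return alpha * F + (1.0 - alpha) * B

def solve_alpha(I, F, B):
    # With F and B known at a pixel, (6) can be inverted for alpha.
    # In the real matting problem all three of {alpha, F, B} are unknown
    # at each pixel, which is why the problem is under-constrained.
    return (I - B) / (F - B)

# Toy pixel: vessel (foreground) intensity 0.9, background 0.1,
# 25% foreground coverage gives intensity 0.25*0.9 + 0.75*0.1 = 0.3.
I = composite(0.25, 0.9, 0.1)
alpha = solve_alpha(I, 0.9, 0.1)
```

The round trip makes the "one equation, three unknowns" observation in ¶ [0053] concrete: inversion is only possible here because F and B were supplied.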
the processor generating a second fundus image by performing background processing to replace a first pixel value of a first pixel configuring the background area with a second pixel value different from the first pixel value, the second pixel value corresponding to a second pixel of the foreground area closest in distance to the first pixel or an average value of a plurality of pixels of the foreground area (e.g. the original preprocessed image and the high pass/normalized image appear in figures 2 and 3, which is taught in ¶ [35] and [36]. Additional Frangi vesselness and Hu's moment images for the high pass/normalized images are shown as high pass images in figures 4 and 5, which is taught in ¶ [41], [42], [46] and [47]. The feature space represents a multi-dimensional feature set passed to a MF-FDOG filter and undergoes the alpha matting process, which is taught in ¶ [48]-[52]. When the alpha matte operation is performed, figure 8A represents the processing of the images shown in figures 2, 3 and 5. In particular, in figures 5A and 5B, the vessels are emphasized and the edge of the fundus is seen in the darker black pixels that form part of the background area pixel color. In figures 5C and 5D, these edge pixels are replaced with pixel values associated with the foreground area, which can be the pixel value closest to the vessel colors within the foreground area. These figures are discussed in ¶ [41], [42] and [46]-[52].).

[0035] Referring back to FIG. 1, processing logic generates a normalized image from the preprocessed image (processing block 113). Image normalization is performed to enhance the vessels in retina images. In one embodiment, processing logic generates the normalized image by applying a median (high pass) filter with a large window to the preprocessed image to obtain a rough estimate of the background image. In one embodiment, the window is a 25×25 pixel window.
Other sized windows may be used, such as, for example, 5×5 or 50×50, depending on the base image size. For example, for an image of size approximately 500×500, a window of 25×25 is used. However, any large block size, such as 4-5% of the image size, can be used. Processing logic then subtracts the preprocessed image from the background image resulting from applying the high pass filter. Vessels appear brighter in this high pass image. This subtracted image is then normalized to a zero mean and unit variance to enhance the vessels lying towards the lower end of grey-levels. This normalization process involves subtracting a mean value from each of the pixel values, finding the variance among the pixel values that have undergone the subtraction operation and then dividing each of the pixel values by the variance. This normalization leads to discriminative enhancement of vessels and suppression of background. FIGS. 2A and B illustrate an original (preprocessed) image and the final high pass/normalized image, respectively.

[0036] Processing logic also generates a LoG filtered image by applying a LoG filter to the preprocessed image (processing block 112). The LoG filter enhances vessel boundaries from the preprocessed image. The LoG filtered image has vessels at a higher intensity level than the background intensity of the preprocessed image. This LoG filtered image is normalized to zero mean and unit variance to increase contrast and utilize the complete span of gray values. FIGS. 3A and B illustrate an original (preprocessed) image and its normalized LoG filtered image, respectively.

[0041] The image that results from applying the line filter over the Frangi image is then normalized to a zero mean and unit variance to enhance the small vessels. Thereafter, in one embodiment, median filtering is performed over the normalized image to reduce noise and traces of central vessel reflex.
The median filtered image is used as a mask over the Frangi image to reduce the floating speckles. In such a case, this mask is multiplied with the Frangi image. The resultant image retains the entire vessel region as in the original (preprocessed) image but noise and false positives are reduced by a large amount.

[0042] FIGS. 4A-C illustrate an example of an image, a Frangi vesselness image generated from the image in FIG. 4A, and the output of line filtering applied on the Frangi image, respectively.

[0046] In one embodiment, prior to calculating Hu's moments for the high pass/normalized image, processing logic performs area thresholding over the image to reduce stray noisy marks in the image before feeding it to the moment generation subroutine. This feature provides a very nice estimate of the skeleton of blood vessels but has the potential to reduce the overall accuracy due to suppression of thin vessels. To avoid this, processing logic adds the line filtered image (prior to normalization) to the output of the Hu's moment based feature image (processing block 118) and adds the resultant image to the feature set.

[0047] FIGS. 5A-B depict Hu's first and second moment on the high pass/normalized image, respectively. FIGS. 5C and D depict the same images depicted in FIGS. 5A and 5B, respectively, after addition of the line filtered image.
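The high pass/normalization step described in ¶ [0035] above can be sketched in a few lines. This is illustrative only: the function name and the SciPy median filter are this note's assumptions, and where the quoted text says "dividing ... by the variance", the sketch divides by the standard deviation, which is what actually yields unit variance.

```python
import numpy as np
from scipy.ndimage import median_filter

def high_pass_normalized(image, window=25):
    """Sketch of the normalization in Talwar ¶[0035] (illustrative only):
    estimate the background with a large-window median filter, subtract,
    then normalize the result to zero mean and unit variance."""
    img = image.astype(float)
    # Large-window median filter gives a rough background estimate.
    background = median_filter(img, size=window)
    # ¶[0035] subtracts the preprocessed image from the background
    # estimate, so vessels appear brighter in the high pass image.
    high_pass = background - img
    centered = high_pass - high_pass.mean()
    # Dividing by the standard deviation gives unit variance.
    return centered / centered.std()
```

On a 500×500 image the 25×25 window corresponds to the 4-5%-of-image-size guideline quoted above.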
[0048] Processing logic also identifies the X and Y coordinates for each pixel (processing block 111) from the original (preprocessed) image to enable feature vectors to be created along with the pixel values from each of the images produced, namely, the LoG filtered image from processing block 112, the high pass/normalized image from processing block 113, the Frangi image from processing block 114, the line filtered image from processing block 116, the image resulting from multiplying the line filtered image from processing block 116 with the Frangi image from processing block 114, and the two images generated from the high pass/normalized image from processing block 113 and the line filtered image from processing block 116 for which Hu's moments were generated. In one embodiment, these images are all gray scale images. This represents feature space 120.

[0049] Once created, processing logic feeds feature space 120, which is the multi-dimensional feature set, into the rest of the vessel extraction process. FIG. 6 is a flow diagram of one embodiment of the vessel extraction process. The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), firmware, or a combination of the three.

[0050] Referring to FIG. 6, processing logic receives feature images 601 of the feature space and applies a matched filter with first-order derivative of the Gaussian (MF-FDOG) to the images (processing block 602). Processing logic also applies the MF-FDOG filter to the original (preprocessed) image 610 (processing block 611).

[0051] Processing logic then performs alpha matting to the image data of the images (processing block 604). In one embodiment, the alpha matting is K-nearest neighbor (KNN)-alpha matting.
[0052] Using the alpha matte 604 produced by performing alpha matting and the filter image from processing block 611, processing logic performs Otsu segmentation (processing block 605) and produces output image 606, which is an image of the delineated blood vessels.

Tri-Map Generation

[0055] As mentioned above, any matting problem needs user inputs in the form of sparse mark-ups. In one embodiment, the dominant vessels in the retina images are used to generate the tri-map automatically. The normalized high pass image consists of only dominant vessels and is used to create a skeleton for the vasculature. This skeleton acts as user scribbles for the foreground area (vessels). In one embodiment, prior to using the normalized high pass image as a tri-map, optic cup removal is applied to the normalized high pass image.

[0056] Morphological thinning operation and area thresholding are performed before using the high pass image as a tri-map. The known background areas are also determined from the same image. It is assumed that the normalized image covers most of the vasculature of the retina images. This image is dilated and the inverted image can then be used as the background mask. FIGS. 7A-C illustrate a normalized image, an example of a foreground map generated by the procedure and a background map generated by this procedure, respectively.

Segmentation

[0057] The feature images in the multi-dimensional feature set contain traces of the boundary of the optic disc and eye due to their vessel like appearance. In one embodiment, a first order derivative of Gaussian (FDOG) filter is used to suppress these. Blood vessels have a gaussian like profile across their cross-section whereas the optic disk and other thick bright regions have an intensity profile similar to a step function. The FDOG curve is anti-symmetric around its center value. Thus, the response to applying an FDOG filter is symmetric for a bright optic disk and anti-symmetric for blood vessels.
The local average of the response (equivalent to low pass filtering) typically returns a high value at the boundary of the optic disk and a minimal value for blood vessels. The inverted image of this response is used as a mask, multiplied with the feature images in the multi-dimensional feature set.

[0058] For choosing a threshold value, the Otsu segmentation technique is used along with the locally averaged response output from the FDOG filter. The Otsu segmentation on the alpha matte returns a threshold value which is added to the locally averaged response image output from the FDOG filter. The resulting image generated by adding the threshold value to the locally averaged response image output from the FDOG filter is then used as a reference threshold image. Every pixel in the resultant alpha matte is compared with the corresponding pixel in the reference image and a decision (binarize) is made based on whether the value in the reference image is higher or lower than the current pixel. This yields a relatively higher threshold value for false-positive regions whereas other regions are compared against the normal value of the Otsu segmentation technique.

[0059] FIGS. 8A-C show an alpha matte, a thresholded image, and ground truth, respectively. As shown in FIGS. 8A-C, the alpha matte contains a prominent optic disk boundary which can be greatly reduced by the FDOG based thresholding technique.

[0060] Thus, a novel scheme of alpha matting to separate foreground from background for purposes of vessel extraction has been disclosed. The tri-map generation as part of the alpha matting is automated from the generated features. This serves as a skeleton for the sure vessel region, from which the background region is constructed. This is a huge step in the automatic extraction of blood vessels from retina color images, thus reducing human involvement.

An Example of a Vessel Extraction System

[0061] FIG. 9 depicts a block diagram of a vessel extraction system.
The vessel extraction system comprises a memory to store one or more images, such as an original image that is to undergo retinal vessel extraction and/or feature set images. The system also includes a processing unit that acts as a retinal vessel extractor to perform alpha matting on a multi-dimensional feature set derived from image data of a first image and perform retinal vessel extraction, including performing segmentation on the image by separating foreground and background image data from the first image using an output from the alpha matting. In one embodiment, the alpha matting comprises K-nearest neighbor (KNN) based alpha matting and the processing unit performs an Otsu segmentation process on the output of the alpha matting.

However, Talwar fails to specifically teach the features of a processor controlling an emission of light from a light source to emit light towards an examined eye, wherein the first fundus image is acquired using reflected light from the examined eye in response to the emission of light from the light source. However, this is well known in the art as evidenced by Fujita. Similar to the primary reference, Fujita discloses evaluating a fundus (same field of endeavor or reasonably pertinent to the problem). Fujita discloses a processor controlling an emission of light from a light source to emit light towards an examined eye (e.g. it is conventional to have an eyeball irradiated with light controlled by a controller in a device in order to capture the state of the fundus. The state of the fundus is then stored as a medical record, which is taught in ¶ [02].).

[0002] In addition, the state of the fundus is observed by irradiating the eyeball with light from the outside, or the state of the fundus is photographed as a fundus photograph (fundus image) by an optical device such as a camera, and is recorded as a medical record.
[0032] Next, the flow of processing of the image analysis computer 2 in the image analysis system 1 of the present embodiment will be described mainly based on the flowcharts of FIGS. 6 and 7. Here, Steps S1 to S20 in FIGS. 6 and 7 correspond to the image analysis program of the present invention.

[0033] First, the fundus of a subject (patient or examinee) is photographed using a fundus camera 9 having a digital camera function, and image data 15 (see FIG. 2) related to the photographed fundus image 3 is read into the image analysis computer 2 by image data reading means 16 (step S1). Here, the fundus image 3 that is the source of the image data 15 may be either a color image or a monochrome image. In the present embodiment, the image data 15 is read directly from the fundus camera 9. Alternatively, a fundus image (fundus photo) printed on photographic paper may be read using an optical reading device such as a scanner, or a fundus photographic negative film may be scanned using a film scanner to acquire the digitized image data 15. Then, the read image data 15 is stored in the storage means 17 (step S2).

[0034] Thereafter, the read image data 15 is analyzed, binarization processing is performed according to a predetermined threshold value using the difference in pixel value of each pixel 21 constituting the image data 15, and two regions, the blood vessel region 4 and the background region 19, are extracted from the processed binary image data 18 (step S3: see FIG. 3). In the photographed fundus image 3, a portion corresponding to the blood vessel region 4 is shown in black or a dark color, and the background region 19 other than the blood vessel region 4 (for example, a retinal region or an optic disc) is generally shown in white or in a color lighter than the blood vessel region 4.
Therefore, the blood vessel region 4 and the background region 19 can be clearly extracted by using the difference in color (pixel value) and binarizing the above-described image data 15 with the prescribed threshold as a boundary. Here, the extracted blood vessel region 4 and the background region 19 are configured by a plurality of square pixels 21 (pixels) closely arranged vertically and horizontally, as schematically shown in FIG. That is, the boundary portion 24 between the blood vessel region 4 and the background region 19 is in a state in which the vertical or horizontal sides of each pixel 21 are in contact with each other, and is configured by a combination of straight lines and right angles. Then, one target pixel 5 is selected from the plurality of pixels 21 (step S4). The selection of the target pixel 5 is performed on all the pixels constituting the blood vessel region 4, as will be described later, so the target pixel 5 is selected in order from the end of the blood vessel region 4 according to rules established in advance. Simultaneously with the selection, the value of the inclination θ with respect to the virtual perpendicular V passing through the pixel center C of the selected target pixel 5 is set to "θ = 0°" (step S5).

Therefore, in view of Fujita, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of a processor controlling an emission of light from a light source to emit light towards an examined eye, incorporated in the device of Talwar, in order to acquire an amount of information about the eye, which can aid in acquiring useful information without a physical burden on a patient (as stated in Fujita ¶ [02]).

Re claim 4: Talwar discloses the image processing method of claim 1, wherein the background area is a single color area (e.g.
the background is a single color in the image shown in figure 7A, which is described in ¶ [55] and [56] above.).

Re claim 5: Talwar discloses the image processing method of claim 1, further comprising the processor generating a third fundus image by binarizing pixel values of pixels of the foreground area in the second fundus image or in an image resulting from emphasizing blood vessels in the second fundus image, by binarization with respect to a threshold value determined based on pixel values of peripheral pixels to the pixels of the foreground area (e.g. Figure 7C shows a binarization of figure 7B, or an inversion of the values, in order to create a third image, which is taught in ¶ [55] and [56] above and illustrated in Figures 7A-7C.).

Re claim 6: Talwar discloses the image processing method of claim 1, further comprising the processor executing processing to analyze blood vessels of a fundus of the examined eye (e.g. the invention teaches analyzing the blood vessels of the image, which is taught in ¶ [18] above.).

Re claim 8: Talwar discloses the image processing method of claim 1, further comprising the processor replacing, with respect to the second fundus image, a pixel value of a pixel of the background area with a third pixel value different from the second pixel value (e.g. the pixels in the image are again compared to a threshold in order to determine whether to binarize the pixel. This would turn the pixel to a different value based on being higher or lower than a current pixel, which is taught in ¶ [57]-[60]. In addition, the background area is turned to white, which is seen in figures 8A-8C.).

Segmentation

[0057] The feature images in the multi-dimensional feature set contain traces of the boundary of the optic disc and eye due to their vessel like appearance. In one embodiment, a first order derivative of Gaussian (FDOG) filter is used to suppress these.
Blood vessels have a gaussian like profile across their cross-section whereas the optic disk and other thick bright regions have an intensity profile similar to a step function. The FDOG curve is anti-symmetric around its center value. Thus, the response to applying an FDOG filter is symmetric for a bright optic disk and anti-symmetric for blood vessels. The local average of the response (equivalent to low pass filtering) typically returns a high value at the boundary of the optic disk and a minimal value for blood vessels. The inverted image of this response is used as a mask, multiplied with the feature images in the multi-dimensional feature set.

[0058] For choosing a threshold value, the Otsu segmentation technique is used along with the locally averaged response output from the FDOG filter. The Otsu segmentation on the alpha matte returns a threshold value which is added to the locally averaged response image output from the FDOG filter. The resulting image generated by adding the threshold value to the locally averaged response image output from the FDOG filter is then used as a reference threshold image. Every pixel in the resultant alpha matte is compared with the corresponding pixel in the reference image and a decision (binarize) is made based on whether the value in the reference image is higher or lower than the current pixel. This yields a relatively higher threshold value for false-positive regions whereas other regions are compared against the normal value of the Otsu segmentation technique.

[0059] FIGS. 8A-C show an alpha matte, a thresholded image, and ground truth, respectively. As shown in FIGS. 8A-C, the alpha matte contains a prominent optic disk boundary which can be greatly reduced by the FDOG based thresholding technique.

[0060] Thus, a novel scheme of alpha matting to separate foreground from background for purposes of vessel extraction has been disclosed.
The tri-map generation as part of the alpha matting is automated from the generated features. This serves as a skeleton for the sure vessel region, from which the background region is constructed. This is a huge step in the automatic extraction of blood vessels from retina color images, thus reducing human involvement.

Re claim 9: Talwar discloses the image processing method of claim 8, wherein the first pixel value is the same as the third pixel value (e.g. the pixels in the background area in figure 8A are white, which is taught in ¶ [57]-[60] above and seen in figures 8A-8C.).

Re claim 10: Talwar discloses the image processing method of claim 1, wherein the background processing is performed on at least pixels adjacent to pixels of the foreground area, among pixels configuring the background area (e.g. background pixel processing occurs on pixels that are adjacent to foreground pixels, which is taught in ¶ [55] and [56] above.).

Re claim 11: Talwar discloses the image processing method of claim 1, wherein the second pixel value is a value in a range of possible values for pixel values of pixels in the foreground area (e.g. the background is turned to a value that was the value of the foreground area before inversion, which is taught in ¶ [57]-[60] above.).

Re claim 13: Talwar discloses an image processing device comprising: a memory, and a processor coupled to the memory (e.g. the invention contains a processor and memory coupled thereto, which is taught in ¶ [62].),

[0062] Referring to FIG.
9, vessel extraction system 910 includes a bus 912 to interconnect subsystems of vessel extraction system 910, such as a processor 914, a system memory 917 (e.g., RAM, ROM, etc.), an input/output controller 918, an external device, such as a display screen 924 via display adapter 926, serial ports 928 and 930, a keyboard 932 (interfaced with a keyboard controller 933), a storage interface 934, a floppy disk drive 937 operative to receive a floppy disk 938, a host bus adapter (HBA) interface card 935A operative to connect with a Fibre Channel network 990, a host bus adapter (HBA) interface card 935B operative to connect to a SCSI bus 939, and an optical disk drive 940. Also included are a mouse 946 (or other point-and-click device, coupled to bus 912 via serial port 928), a modem 947 (coupled to bus 912 via serial port 930), and a network interface 948 (coupled directly to bus 912).

wherein the processor: acquires a first fundus image of an examined eye including a foreground area, which is an area of a fundus that includes blood vessels, and a background area other than the foreground area, which is an area other than the fundus (e.g. an image of a fundus is acquired, which is taught in ¶ [18], [29] and [30] above. The image is separated into foreground and background, which is taught in ¶ [53] above. A processor is taught in ¶ [62] above. Figures 5 and 7 show the vessels within the foreground area and the background area outside the area where the vessels are present.); and generates a second fundus image by performing background processing to replace a first pixel value of a first pixel configuring the background area with a second pixel value different from the first pixel value, the second pixel value corresponding to a second pixel of the foreground area closest in distance to the first pixel or an average value of a plurality of pixels of the foreground area (e.g.
the original preprocessed image and the high pass/normalized image appear in figures 2 and 3, which is taught in ¶ [35] and [36]. Additional Frangi vesselness and Hu's moment images for the high pass/normalized images are shown as high pass images in figures 4 and 5, which is taught in ¶ [41], [42], [46] and [47]. The feature space represents a multi-dimensional feature set passed to a MF-FODG filter and undergoes the alpha matting process, which is taught in ¶ [48]-[52]. When the alpha matting operation is performed, figure 8A represents the processing of the images shown in figures 2, 3 and 5. In particular, in figures 5A and 5B, the vessels are emphasized and the edge of the fundus appears in the darker black pixels that form part of the background area pixel color. In figures 5C and 5D, these edge pixels are replaced with pixel values associated with the foreground area, which can be the pixel value closest to the vessel colors within the foreground area. These figures are discussed in ¶ [41], [42] and [46]-[52].).

However, Talwar fails to specifically teach the features of an examined eye. This is well known in the art, as evidenced by Fujita. Similar to the primary reference, Fujita discloses evaluating a fundus (same field of endeavor or reasonably pertinent to the problem). Fujita discloses an examined eye (e.g. it is conventional to have an eyeball irradiated with light controlled by a controller in a device in order to capture the state of the fundus. The state of the fundus is then stored as a medical record, which is taught in ¶ [02] above.).
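The disputed limitation, replacing a background pixel either with the value of the closest foreground pixel or with an average of foreground pixels, can be made concrete with a short sketch. This is illustrative only, not code from Talwar or the application; it assumes a grayscale image and a binary foreground mask, and the function names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def replace_background_nearest(image, foreground_mask):
    """Replace each background pixel with the value of the foreground
    pixel closest to it in Euclidean distance."""
    # distance_transform_edt can also return, for every background
    # position, the coordinates of the nearest foreground pixel.
    _, (rows, cols) = ndimage.distance_transform_edt(
        ~foreground_mask, return_indices=True)
    out = image.copy()
    bg = ~foreground_mask
    out[bg] = image[rows[bg], cols[bg]]
    return out

def replace_background_mean(image, foreground_mask):
    """Replace every background pixel with the average value of the
    foreground pixels (the claim's other alternative)."""
    out = image.astype(float).copy()
    out[~foreground_mask] = image[foreground_mask].mean()
    return out

# Tiny demonstration: two foreground pixels, everything else background.
img = np.array([[10., 0., 0.],
                [0.,  0., 0.],
                [0.,  0., 20.]])
fg = img > 0
nearest = replace_background_nearest(img, fg)   # each bg pixel copies its closest fg value
averaged = replace_background_mean(img, fg)     # each bg pixel becomes (10 + 20) / 2
```

The nearest-neighbor variant relies on SciPy's Euclidean distance transform, which can report the indices of the closest foreground pixel for every background position; foreground pixels are left unchanged in both variants.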
Therefore, in view of Fujita, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of an examined eye, incorporated in the device of Talwar, in order to acquire an amount of information about the eye, which can aid in acquiring useful information without a physical burden on a patient (as stated in Fujita ¶ [02]).

Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Talwar in view of Fujita and Yasuno (US Pub 2016/0135683).

Re claim 14: Talwar discloses a non-transitory storage medium storing a program that causes a computer to execute processing comprising: acquiring a first fundus image of an examined eye including a foreground area, which is an area of a fundus that includes blood vessels, and a background area other than the foreground area, which is an area other than the fundus (e.g. a fundus image is acquired, which is taught in ¶ [18], [29] and [30] above. The image is separated into foreground and background, which is taught in ¶ [53] above. A processor is taught in ¶ [62] above. Figures 5 and 7 show the vessels within the foreground area and the background area outside the area where the vessels are present.); and generating a second fundus image by performing background processing to replace a first pixel value of a first pixel configuring the background area with a second pixel value different from the first pixel value, the second pixel value corresponding to a second pixel of the foreground area closest in distance to the first pixel or an average value of a plurality of pixels of the foreground area (e.g. the original preprocessed image and the high pass/normalized image appear in figures 2 and 3, which is taught in ¶ [35] and [36]. Additional Frangi vesselness and Hu's moment images for the high pass/normalized images are shown as high pass images in figures 4 and 5, which is taught in ¶ [41], [42], [46] and [47].
The feature space represents a multi-dimensional feature set passed to a MF-FODG filter and undergoes the alpha matting process, which is taught in ¶ [48]-[52]. When the alpha matting operation is performed, figure 8A represents the processing of the images shown in figures 2, 3 and 5. In particular, in figures 5A and 5B, the vessels are emphasized and the edge of the fundus appears in the darker black pixels that form part of the background area pixel color. In figures 5C and 5D, these edge pixels are replaced with pixel values associated with the foreground area, which can be the pixel value closest to the vessel colors within the foreground area. These figures are discussed in ¶ [41], [42] and [46]-[52].).

However, Talwar fails to specifically teach the features of an examined eye, wherein the first fundus image is acquired using reflected light from the examined eye in response to an emission of light from a light source. This is well known in the art, as evidenced by Fujita. Similar to the primary reference, Fujita discloses evaluating a fundus (same field of endeavor or reasonably pertinent to the problem). Fujita discloses controlling an examined eye, wherein the first fundus image is acquired using reflected light from the examined eye in response to an emission of light from a light source (e.g. it is conventional to have an eyeball irradiated with light controlled by a controller in a device in order to capture the state of the fundus. The state of the fundus is then stored as a medical record, which is taught in ¶ [02] above.).
Therefore, in view of Fujita, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of an examined eye, wherein the first fundus image is acquired using reflected light from the examined eye in response to an emission of light from a light source, incorporated in the device of Talwar, in order to acquire an amount of information about the eye, which can aid in acquiring useful information without a physical burden on a patient (as stated in Fujita ¶ [02]).

However, the combination above fails to specifically teach the features of a vascular image in which retinal blood vessels are removed and choroidal blood vessels are visible. This is well known in the art, as evidenced by Yasuno. Similar to the primary reference, Yasuno discloses emphasizing choroidal vessels from scanned data (same field of endeavor or reasonably pertinent to the problem). Yasuno discloses a vascular image in which retinal blood vessels are removed and choroidal blood vessels are visible (e.g. the invention discloses separating the choroidal vessels from the rest of the vascular network image. This involves removing the shadows of the retinal vessel image while the choroidal vessel images are emphasized or shown with the removal of the retinal vessel image. This is taught in ¶ [38]-[41], [114] and [119]-[122].).
[0038] To achieve the aforementioned object, the present invention provides an optical coherence tomography apparatus for selectively visualizing and analyzing the vascular network in the choroidal layer comprising an optical coherence tomography, and a computer that obtains three-dimensional OCT tomographic images based on OCT-measured data acquired by the optical coherence tomography and processes the three-dimensional OCT tomographic images, wherein such optical coherence tomography apparatus for selectively visualizing and analyzing the vascular network in the choroidal layer is characterized in that: the computer functions as a means for selectively separating out only the images of the choroidal vessels from the three-dimensional OCT tomographic images to obtain image data of the choroidal vessels, and also as a means for obtaining the data to be used in the quantitative evaluation of the shape of the choroidal vessels based on the image data of the choroidal vessels; and the means for acquiring image data of the choroidal vessels is constituted in such a way that tomographic image data of the choroidal layer is extracted from OCT-measured data, the tomographic image data of the choroidal layer is sliced at equally pitched positions in the depth direction of the choroidal layer and data of image slices is extracted, after which image data of the choroidal vessels is obtained from the data of image slices. 
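A rough sketch of the equal-pitch depth slicing described above, assuming (hypothetically) that the choroidal tomographic data is held as a NumPy volume ordered (depth, height, width); the function name and the pitch value are illustrative only:

```python
import numpy as np

def slice_at_equal_pitch(volume, pitch):
    """Extract 2-D image slices at equally pitched positions along the
    depth axis of a 3-D tomographic volume."""
    depths = np.arange(0, volume.shape[0], pitch)
    return volume[depths]  # one 2-D slice per sampled depth

vol = np.arange(24).reshape(6, 2, 2)          # toy volume, 6 depth positions
slices = slice_at_equal_pitch(vol, pitch=2)   # samples depths 0, 2, 4
```

Image data of the choroidal vessels would then be obtained from the resulting stack of slices, as the paragraph above describes.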
[0039] Desirably the means for acquiring image data of the choroidal vessels is constituted in such a way that, for each of multiple windows of different sizes, each pixel in the image slice is binarized according to whether or not the pixel color density is equal to or higher than the pre-determined specified threshold in order to obtain an estimated vessel parts-extracted binary image, and in this estimated vessel parts-extracted binary image, those regions where the ratio of pixels with different binary data to each pixel in the applicable window is equal to or greater than the pre-determined specified value are deleted as pseudo vessels, while the estimated vessel parts whose diameter is smaller than the pre-determined specified diameter with respect to the dimension of the applicable window are also deleted as noise and non-vessels, in order to obtain binary image data of the vascular network in the choroidal layer.

[0040] Desirably the means for acquiring image data of the choroidal vessels is constituted in such a way that the depth-direction slope of the anterior region of Bruch's membrane at the back of the eye is detected from the three-dimensional OCT tomographic images to obtain data at the positions of the inner segment/outer segment junctions with the photoreceptor cells at the back of the eye, so that the data at the positions of the inner segment/outer segment junctions and Bruch's membrane is used to extract highly reflective structures around the retinal pigment epithelium, after which the optical intensities at the highly reflective structures are averaged to obtain image data specifying the shadows created by the retinal vessels at the highly reflective structures, which is then followed by flange-filtering the image data to emphasize the lines and binarizing the obtained image data, in order to obtain binary image data of the shadows of the retinal vessels.
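The binarize-then-delete-noise step of ¶ [0039] might be sketched for a single window size as follows; the threshold, the minimum component size, and the function name are illustrative assumptions rather than values from Yasuno:

```python
import numpy as np
from scipy import ndimage

def binarize_and_remove_noise(image_slice, threshold, min_size):
    """Binarize by pixel density threshold, then delete connected
    components smaller than min_size as noise/non-vessels."""
    binary = image_slice >= threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_size) + 1  # label ids to keep
    return np.isin(labels, keep_labels)

# One 4-pixel "vessel" and one isolated noise pixel.
img = np.array([[5, 5, 0, 0],
                [5, 5, 0, 9],
                [0, 0, 0, 0]])
mask = binarize_and_remove_noise(img, threshold=1, min_size=2)
```

A full implementation per the paragraph would repeat this for each window size and also delete the pseudo-vessel regions; the sketch shows only the thresholding and small-component removal.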
[0041] Desirably the means for acquiring image data of the choroidal vessels is constituted in such a way that, for all pixels corresponding to the vessels, classified data of medium and small vessels and large vessels that have been classified by magnitude of optical intensity is created based on the tomographic image data of the choroidal layer, and based on this classified data, the binary image data of the vascular network in the choroidal layer as obtained for each of the multiple windows of different sizes is selectively combined, while at the same time binary image data of the choroidal network vessels separating out only the choroidal vessels is formed based on the binary image data of the shadows of the retinal vessels.

[0114] On the OCT tomographic image of the choroidal layer, shadows of the retinal vessels in front are projected and mistaken as the choroidal vessels. This mistake can inhibit accurate quantitative evaluation of the thickness of the choroidal layer and thickness of the choroidal vessels. Accordingly, these shadows must be removed.

(7) Means and Method for Separating Out the Shadows Created by the Retinal Vessels ((7) in FIG. 3)

[0119] As explained in (6) above, the retinal vessels enter the image data of the choroidal layer as shadows and are mistaken as the choroidal vessels in the data of image slices. This mistake can inhibit accurate quantitative evaluation of the thickness of the choroidal layer and thickness of the choroidal vessels.

[0120] Accordingly, the images at the highly reflective structures obtained in (6) above must be used to separate out the shadows created by the retinal vessels, from the choroidal vessels. The image-processing program pertaining to the present invention causes the computer to function as a means for separating out the shadows created by the retinal vessels and to perform the following operations.
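The shadow-separation operations introduced here, which ¶ [0121]-[0122] detail (binarize the shadow image, erode then dilate the "1" regions, then remove the result from the binary choroidal-vessel image), might look roughly like this hypothetical sketch; the function, its parameters, and the threshold are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def remove_retinal_shadows(choroid_binary, shadow_image, threshold):
    """Binarize the retinal-shadow image, erode then dilate its "1"
    regions (the step the reference calls morphological closing), and
    remove the shadows from the binary choroidal-vessel image."""
    shadow = shadow_image >= threshold
    shadow = ndimage.binary_dilation(ndimage.binary_erosion(shadow))
    return choroid_binary & ~shadow

choroid = np.ones((5, 5), dtype=bool)   # pretend every pixel is vessel
shadows = np.zeros((5, 5))
shadows[1:4, 1:4] = 10.0                # a 3x3 retinal-shadow blob
cleaned = remove_retinal_shadows(choroid, shadows, threshold=5.0)
```

With SciPy's default cross-shaped structuring element, the erode-then-dilate pass shrinks the shadow mask, which is one way to read the reference's remark that shadows "extracted slightly thicker than they actually are" are removed.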
[0121] The image data at the highly reflective structures is flange-filtered to emphasize the lines, and as a result of non-linear binarization with appropriate threshold (flange filtering, etc.), image data is obtained which consists of "1" representing the vessel parts and "0" representing the remainder, as shown in FIG. 6 (b).

[0122] Thereafter, morphological closing (the regions having the value "1" are eroded and then dilated) is performed to ensure continuity of the vessels, which effectively removes the shadows of the retinal vessel image that have been extracted slightly thicker than they actually are, from the choroidal vessel image. Since the retinal vessel image is completely removed, the binary image data shown in FIG. 6 (c) can be obtained.

Therefore, in view of Yasuno, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of a vascular image in which retinal blood vessels are removed and choroidal blood vessels are visible, incorporated in the device of Talwar, as modified by Fujita, in order to remove the retinal vessel images to view a choroidal vessel image, which clears an image to view the appropriate vessels within the image (as stated in Yasuno ¶ [34]).

Re claim 15: Talwar discloses the image processing method of claim 1, wherein the foreground area is the area of the fundus that includes the blood vessels, which is reached by reflected light from the fundus, and the background area other than the foreground area is the area other than the fundus, which is not reached by the reflected light from the fundus (e.g. figure 4A shows an example of light shone on the fundus area, which highlights the blood vessels in the eye reflected in figure 4B. The area outside of the foreground area where the vessels are located is black; this area is not reached by reflected light since it is outside of the eye area.
The reflected light and the figures showing it are explained in ¶ [32] and [42].).

[0032] In one embodiment, the preprocessing includes Central Vessel Reflex (lighter region at the center of the larger vessels) suppression. In one embodiment, the Central Vessel Reflex removal is accomplished by performing a morphological closing operation with a disk structuring element of 1 pixel diameter applied to an image.

[0042] FIGS. 4A-C illustrate an example of an image, a Frangi vesselness image generated from the image in FIG. 4A, and the output of line filtering applied on the Frangi image, respectively.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dimter discloses replacing background pixels with foreground pixels when discussing figure 9.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON whose telephone number is (571)270-1351. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abderrahim Merouan, can be reached on 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAD DICKERSON/
Primary Examiner, Art Unit 2681

Prosecution Timeline

Apr 14, 2022
Application Filed
Aug 10, 2024
Non-Final Rejection — §103
Nov 14, 2024
Response Filed
Feb 19, 2025
Final Rejection — §103
May 12, 2025
Request for Continued Examination
May 13, 2025
Response after Non-Final Action
May 31, 2025
Non-Final Rejection — §103
Dec 03, 2025
Response Filed
Mar 07, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12603960
IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12579817
Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
2y 5m to grant Granted Mar 17, 2026
Patent 12522110
APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
2y 5m to grant Granted Jan 13, 2026
Patent 12519896
IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
63%
Grant Probability
86%
With Interview (+23.0%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
