Corrected DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Note that the Non-Final Office Action dated 10/1/2025 omitted claims 21-22. The rejection below addresses claims 1-22.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Wu Menglin et al ("Geographic atrophy segmentation in SD-OCT images using synthesized fundus autofluorescence imaging") in view of Boyd et al (US 2022/0207729).
As to claim 1, Wu et al teaches the method comprising:
receiving fundus autofluorescence (FAF) imaging data of a retina ("[To generate] the synthesized FAF image, we first apply the restricted summed-voxel projection (RSVP) method [9] to generate the en-face OCT image for GA visualization. Then, the en-face OCT image with GA locations is fed to our RA-CGAN to train the network and synthesize plausible FAF images", section 3, FAF image generation, page 3);
receiving optical coherence tomography (OCT) imaging data of the retina ("generate the en-face OCT image for GA visualization", section 3, FAF image generation);
and predicting an assessment for a geographic atrophy (GA) lesion in the retina ("GA assessment", abstract; "The FAF images generated by the RA-CGAN with augmentation better preserved the GA details and provided clearer lesion boundaries", section 6.2) using the FAF and OCT imaging data ("The network balances the consistency between the entire synthesized FAF image and the lesion. We use a fully convolutional deep network architecture to segment the GA region using the multimodal images, where the features of the en-face OCT and synthesized FAF images are fused on the front-end of the network", abstract; see also "we design a fully convolutional deep network architecture to segment the GA region by utilizing both the en-face OCT and synthesized FAF images", section 4, GA segmentation using multimodal images).
While Wu et al teaches the limitations above, Wu fails to teach that the GA assessment includes a lesion growth rate. Boyd et al teaches, at a training phase, receiving training data corresponding to a plurality of ocular images and performing feature extraction and feature selection to generate features based on the training data to build a pattern recognition model; and, at a classification phase, receiving a plurality of ocular images corresponding to a plurality of imaging modalities and classifying features of the plurality of ocular images using the pattern recognition model (abstract). Boyd's FIG. 4 is a flow diagram of a method 300 for processing, storage and retrieval of ocular images according to some aspects: at 302, system 100 acquires ocular images in various modalities, using imaging device 208 or from database 202 (figure 4). Boyd et al further teaches that post-processing unit 460 processes the stored retinal images 412 to generate unique information regarding the stage and progression of various ocular diseases, and that the additional processing features can be implemented on a stand-alone basis or sequentially (paragraph [0196]). Note that figure 17A shows the growth rate, where DNIRA provides a quantifiable, functional readout of tissue function: using DNIRA, system 100 can detect areas of black in the images that correspond to regions of irreversible tissue loss and to abnormal or sick tissue; over time, areas of black can remain the same, increase or decrease in size; and system 100 can implement a determination of "baseline DNIRA" images (paragraph [0227]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use DNIRA, which includes the growth rate of the lesion, in the GA assessment in order to adequately diagnose, stage, grade, prognosticate, monitor or predict response to treatments or interventions, or to predict safety of those treatments or interventions.
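For context on the en-face OCT generation quoted above, the following is a minimal Python/NumPy sketch of a restricted summed-voxel projection of the kind Wu et al reference (RSVP, [9]): voxel intensities are averaged only within a restricted axial slab of each A-scan, so that GA regions appear as relatively high reflectance in the en-face image. The array layout, slab bounds, and function names are illustrative assumptions, not the paper's exact implementation.

```python
# Simplified RSVP-style en-face projection: average voxel intensity within a
# restricted axial slab per A-scan. Layout and slab bounds are assumptions.
import numpy as np

def restricted_projection(volume: np.ndarray, top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """volume: OCT cube of shape (depth, height, width), one A-scan per (h, w).
    top/bottom: per-A-scan axial indices of shape (height, width) bounding the slab."""
    depth, height, width = volume.shape
    en_face = np.zeros((height, width), dtype=np.float32)
    for h in range(height):
        for w in range(width):
            z0 = int(top[h, w])
            z1 = max(int(bottom[h, w]), z0 + 1)      # guard against an empty slab
            en_face[h, w] = volume[z0:z1, h, w].mean()  # mean reflectance in the slab
    en_face -= en_face.min()                          # normalize to [0, 1] for fusion/display
    return en_face / max(float(en_face.max()), 1e-8)
```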
As to claim 2, Boyd et al teaches the method of claim 1, further comprising: predicting a baseline lesion area for the GA lesion using the FAF and OCT imaging data (segmentation unit 404 can segment regions of the image and features of interest in the image acquired in the first visit to serve as the baseline for size and shape of feature, paragraphs [0197], [0206]).
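Claims 1-2 together pair a baseline lesion area with a lesion growth rate. As a hedged illustration of the underlying arithmetic, consistent with Boyd's longitudinal DNIRA comparison (areas of black that "remain the same, increase or decrease in size" over time), a minimal sketch follows; the pixel scale, visit dates, and function names are hypothetical and not taken from either reference.

```python
# Minimal sketch: lesion area from a binary segmentation mask, and growth
# rate as the change in area between two visits. Inputs are hypothetical.
from datetime import date
import numpy as np

def lesion_area_mm2(mask: np.ndarray, mm_per_pixel: float) -> float:
    """mask: binary lesion segmentation; area = pixel count * per-pixel area."""
    return float(mask.sum()) * mm_per_pixel ** 2

def growth_rate_mm2_per_year(mask_t0: np.ndarray, mask_t1: np.ndarray,
                             d0: date, d1: date, mm_per_pixel: float) -> float:
    years = (d1 - d0).days / 365.25
    baseline = lesion_area_mm2(mask_t0, mm_per_pixel)   # "baseline" area, first visit
    followup = lesion_area_mm2(mask_t1, mm_per_pixel)
    return (followup - baseline) / years
```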
As to claim 3, Wu et al teaches the method of claim 1, wherein predicting the lesion growth rate further comprises: generating a first input using the FAF imaging data (synthetic FAF, figure 4) and a second input using the OCT imaging data (en-face OCT, figure 4); fusing together the first and second input to form a fused input (fusion net, figure 4); and generating the lesion growth rate for the geographic atrophy lesion using the fused input (final segmentation, page 6).
As to claim 4, Boyd et al teaches the method of claim 3, further comprising: extracting a biomarker from the fused input (the imaging method can be used as a surrogate biomarker for diagnosis, prognosis, disease progression, treatment selection and prediction of a treatment response or clinical trial design, or any combination thereof, paragraph 7).
As to claim 5, Boyd et al teaches the method of claim 4, wherein the biomarker comprises lesion perimeter, lesion shape-descriptive features, wedge-shaped subretinal hyporeflectivity, retinal pigment epithelium (RPE) attenuation and disruption, hyper-reflective foci, reticular pseudodrusen (RPD), multi-layer thickness reduction, photoreceptor atrophy, hypo-reflective cores in drusen, high central drusen volume, surrounding abnormal autofluorescence patterns, previous GA progression rate, outer-retinal tubulation, choriocapillaris flow void, GA lesion size, GA distance to fovea, lesion contiguity, or a combination thereof (figures 28-32).
As to claim 6, Wu et al teaches the method of claim 1, wherein predicting the lesion growth rate further comprises: generating a first input using the set of FAF images (synthetic FAF, figure 4) and a second input using the set of OCT images (en-face OCT, figure 4); extracting a first feature of interest from the FAF imaging data; extracting a second feature of interest from the OCT imaging data; fusing together the first feature of interest and the second feature of interest to form a fused feature input ("To combine the features of the en-face OCT and the synthesized FAF, we designed a fully convolutional deep network to solve the multimodal image segmentation problem, in which two images of different modalities are fused before the U-Net layers for segmentation. Our team previously performed GA segmentation using geometric deformation model [9,12,13] and a DVM [15]", section 2.2, page 3); and generating the lesion growth rate for the geographic atrophy lesion using the fused feature input ("The fusion net is on the front-end of the network and combines the features of the en-face OCT and synthesized FAF images. The number of output channels of the fusion net for the en-face OCT and synthesized FAF images is 64 and 32, respectively, because we treat the synthesized FAF image as a complementary secondary modality and the en-face OCT image as the main modality. The segmentation net is implemented by a U-Net structure [33], which consists of encoder-decoder paths and skip connections, similar to the architecture of the generator in the RA-CGAN. Fig. 4 illustrates the entire network. We apply the Adam optimizer [46] for stochastic gradient updating and employ the binary cross-entropy loss function. The output of the network is a probability map computed by the sigmoid function and indicates the probability of each pixel belonging to the GA", section 4, page 5).
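The passage quoted above fixes a few architectural facts: 64 output channels for the en-face OCT branch, 32 for the synthesized FAF branch, a U-Net-style segmentation net with encoder-decoder paths and skip connections, a sigmoid probability map, binary cross-entropy loss, and the Adam optimizer. The following compressed PyTorch sketch reflects those facts; all other depths, kernel sizes, and names are assumptions rather than the paper's exact network.

```python
# Compressed sketch of Wu et al.'s fusion scheme: modality-specific branches
# (OCT -> 64 channels as main modality, FAF -> 32 as secondary) concatenated
# and passed to a small U-Net-style encoder-decoder. Sizes beyond the 64/32
# split are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class FusionSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.oct_branch = conv_block(1, 64)   # main modality: 64 channels
        self.faf_branch = conv_block(1, 32)   # secondary modality: 32 channels
        self.enc1 = conv_block(96, 64)
        self.down = nn.MaxPool2d(2)
        self.enc2 = conv_block(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)       # skip connection: 64 + 64 channels
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, oct_img: torch.Tensor, faf_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.oct_branch(oct_img), self.faf_branch(faf_img)], dim=1)
        e1 = self.enc1(fused)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))   # per-pixel GA probability map

model = FusionSegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, as in the paper
loss_fn = nn.BCELoss()                                     # binary cross-entropy, as in the paper
```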
As to claim 7, Wu et al teaches the method of claim 6, wherein the retina is associated with a patient, the method further comprising: receiving clinical factor data associated with the patient ("A dataset of 56 SD-OCT volumes from 56 patients was acquired using a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA)", section 5.1); fusing together the clinical factor data with the first feature of interest and the second feature of interest to form the fused feature input; and generating the lesion growth rate for the geographic atrophy lesion using the fused feature input (fusion net and segmentation net, section 4, page 5, as quoted for claim 6 above).
As to claim 8, Wu et al teaches the method of claim 7, wherein the clinical factor data includes a subject's age, sex, smoking status, an observed GA lesion area, a distance of an observed GA lesion to a foveal center of the retina, image contiguity, a best-corrected visual acuity (BCVA) score, a low-luminance deficit (LLD) score, or a combination thereof (figure 1, B-scan showing the distance of the lesion to the center of the retina).
As to claim 9, Wu et al teaches the method of claim 5, wherein the fused feature input is formed using a model comprising an average pooling method, a squeeze and excitation method, or a combination thereof (use of CNN, section 2.2; "we average the pixel intensity of the band", such that "the region excluding GA appears as low reflection and the region including GA appears as a relatively high reflection in the en-face OCT", section 3.1).
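Claim 9 recites average pooling or squeeze-and-excitation as fusion mechanisms, while the cited passages of Wu describe only plain intensity averaging. For reference, a generic squeeze-and-excitation block (global average pooling followed by a two-layer bottleneck that rescales channels) would look roughly as follows; this is a textbook SE block, not a structure disclosed by either reference, and its sizes are assumptions.

```python
# Generic squeeze-and-excitation (SE) block: global average pooling
# ("squeeze"), a two-layer bottleneck ("excitation"), and per-channel
# rescaling of the fused feature map.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: (N, C, H, W) -> (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * weights                     # excitation: reweight each channel
```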
As to claim 10, Boyd et al teaches the method of claim 1, further comprising: receiving infrared (IR) imaging data of the retina; and predicting a lesion growth rate for a geographic atrophy lesion in the retina using the FAF imaging data, the OCT imaging data, and the IR imaging data (comparison of images over time (Time point 1 vs. Time point 4), which "can be extended to compare images taken from different modalities, such as fundus auto-fluorescence (FAF), infrared (IR), or DNIRA. Report unit 448 can generate graphical representations of output data for display on display device 450 as part of GUI. Data selection 436 selects and isolates data sets for comparison or for use by report generation 448", paragraph [0201]).
As to claim 11, Wu et al teaches the method of claim 3, further comprising: pre-processing the FAF imaging data to form the first input, the pre-processing including macular field FAF image selection, region of interest extraction, image contrast adjustment, or multi-field FAF image combination (Fig. 7 caption: "Examples of GA segmentation using various approaches. From the first column to the seventh column: the reference standard obtained by a specialist (manual segmentation of the FAF image followed by registration onto the OCT en-face image), and the results of U-Net (using synthesized FAF), the proposed method, Chen's method, CVLSF, U-Net (using en-face OCT), and DVM. The dashed green ellipse indicates significant errors. The size of the GA region increases from (a) to (e)").
As to claim 12, Wu et al teaches the method of claim 3, further comprising: pre-processing the OCT imaging data to form the second input, the pre-processing comprising: generating a set of en-face maps above a retinal membrane and below the retinal membrane; and predicting the lesion growth rate for the GA lesion using the generated set of en-face maps (Fig. 5 caption: "Procedure of segmentation detail refinement. (a) Probability map. (b) Clustered probability map obtained via SFCM. The white, gray, and black regions represent the GA, undetermined area, and background, respectively. (c) Clustered synthesized FAF image obtained via SFCM. The white and black areas represent the low-reflection area and the background, respectively. (d) Final refined segmentation result").
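The Fig. 5 caption quoted above outlines Wu et al's refinement: the probability map and the synthesized FAF image are each clustered (via spatial fuzzy c-means, SFCM) into GA, undetermined, and background regions, and undetermined pixels are resolved against the FAF clustering. A simplified NumPy sketch of that resolution step follows, with plain thresholds standing in for SFCM; the threshold values and function name are illustrative assumptions.

```python
# Simplified stand-in for the Fig. 5 refinement: split the probability map
# into GA / undetermined / background, then keep undetermined pixels only
# where the synthesized FAF is dark (low reflection, where GA appears).
# Wu et al. use SFCM clustering; plain thresholds are used here instead.
import numpy as np

def refine_segmentation(prob_map: np.ndarray, faf: np.ndarray,
                        hi: float = 0.7, lo: float = 0.3, dark: float = 0.35) -> np.ndarray:
    ga = prob_map >= hi                     # confidently GA
    undetermined = (prob_map > lo) & ~ga    # ambiguous band between thresholds
    faf_dark = faf <= dark                  # low-reflection regions of the FAF image
    return ga | (undetermined & faf_dark)   # resolve ambiguity against the FAF cue
```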
The limitations of claims 13-20 and 22 have been addressed above.
Allowable Subject Matter
Claim 21 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mrs. Jennifer Mahmoud can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664