DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/16/2025 have been fully considered but they are not persuasive.
Ozcan 2019 discloses, in [0020], [0049], [0050], and [0077], using multi-height phase retrieval with eight (8) holograms acquired at different sample-to-sensor distances, which corresponds to the claimed “obtaining a plurality of raw holographic intensity or amplitude images of the sample volume at different sample-to-sensor distances”.
Further, “inputting raw holographic intensity or amplitude images directly into a trained neural network” is not a limitation recited in the claimed invention.
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
In response to applicant's argument that Cho is not related to imaging of specimens or samples with a microscope, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, it would have been obvious to use a convolutional recurrent neural network (RNN) in order to achieve image restoration from a big data set (CHO [0109]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ozcan et al. (US 20190294108 A1) (hereinafter Ozcan 2019) in view of CHO et al. (US 20200090306 A1) (hereinafter CHO) or, in the alternative, LEE et al. (US 20230251603 A1) (hereinafter LEE).
Regarding claim 1. Ozcan 2019 discloses A method of simultaneously performing auto-focusing and phase-recovery (abstract, the trained deep neural network simultaneously achieves phase-recovery and auto-focusing) using a plurality of holographic intensity or amplitude images of a sample volume (abstract, A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device) comprising:
obtaining a plurality of raw holographic intensity or amplitude images of the sample volume at different sample-to-sensor distances using a lens free microscope device comprising an image sensor ([0020], [0049], [0050], [0077] using multi-height phase retrieval with 8 holograms acquired at different sample-to-sensor distances; [0041] receives a single hologram intensity image 20 of a sample 22 obtained with an image sensor 24; [0008] obtaining a single hologram intensity image of the sample using an image sensor (e.g., an image sensor found in a lens-free microscope image); figure 1, [0041] The image 20 is obtained using an imaging device 110, for example, a lens-free microscope device; claim 12, the single hologram intensity image is obtained using a lens-free microscope device); and
providing a trained convolutional neural network (CNN) comprising respective down-sampling and up-sampling paths with respective convolutional blocks at a plurality of different scales wherein convolutional recurrent blocks connect respective convolutional blocks between respective convolutional blocks at the same scale (figure 12, [0034] The network has a down-sampling decomposition path (arrows 77) and a symmetric up-sampling expansion path (arrows 78). The arrows 79 represent the connections between the down-sampling and up-sampling paths, where the channels of the output from the down-sampling block are concatenated with the output from the corresponding up-sampling block, doubling the channel numbers; figure 12, [0103] The trained deep neural network 10 is a CNN and it consists of a down-sampling path 70a-70d as well as a symmetric up-sampling path 72a-72d; [0106]) that is executed by the image processing software using one or more processors, wherein the trained CNN is trained with holographic images obtained at different sample-to-sensor distances and back-propagated to a common axial plane and their corresponding in-focus phase-recovered ground truth images ([0042], figure 2, the image processing software 104 performs free space back-propagation, without phase retrieval, to create a real input image 30 and an imaginary input image 32 of the sample; [0064] back-propagated to the sample plane, yielding the amplitude and phase, or, real and imaginary images of the sample; [0009] the deep neural network or convolutional neural network is trained using a plurality of training hologram intensity images. The training updates the neural network's parameter space Θ which includes kernels, biases, and weights. The convolution neural network may be programed using any number of software programs; [0049] The first step in the deep learning-based phase retrieval and holographic image reconstruction framework involves “training” of the neural network 10, i.e., learning the statistical transformation between a complex-valued image that results from the back-propagation of a single hologram intensity of the sample 22 (or object(s) in the sample 22) and the same image of the sample 22 (or object(s) in the sample 22) that is reconstructed using a multi-height phase retrieval algorithm using eight (8) hologram intensities acquired at different sample-to-sensor distances), wherein the trained CNN is configured to receive a set of real input images and imaginary input images of the sample volume that are generated from the plurality of raw holographic intensity or amplitude images or the raw holographic intensity or amplitude images obtained at different sample-to-sensor distances (abstract, A trained deep neural network is provided that is executed by the image processing software using one or more processors and configured to receive the real input image and the imaginary input image of the sample and generate an output real image and an output imaginary image; [0049] The first step in the deep learning-based phase retrieval and holographic image reconstruction framework involves “training” of the neural network 10, i.e., learning the statistical transformation between a complex-valued image that results from the back-propagation of a single hologram intensity of the sample 22 (or object(s) in the sample 22) and the same image of the sample 22 (or object(s) in the sample 22) that is reconstructed using a multi-height phase retrieval algorithm using eight (8) hologram intensities acquired at different sample-to-sensor distances) and simultaneously outputs an in-focus output real image and an in-focus output imaginary image of the sample volume, substantially matching the image quality of the ground truth images ([0101] the output real and/or imaginary images (i.e., the object images) are in-focus; [0013] the output real and/or imaginary images of all the objects in the sample volume are brought into focus, all in parallel).
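For clarity of the record, the claimed arrangement, an encoder-decoder CNN in which convolutional recurrent blocks join same-scale convolutional blocks and fuse inputs acquired at different sample-to-sensor distances, may be illustrated by the following minimal, hypothetical Python (PyTorch) sketch; all class names, channel counts, and dimensions are illustrative only and are not drawn from Ozcan 2019, CHO, or LEE:

    import torch
    import torch.nn as nn

    class ConvGRUCell(nn.Module):
        # Minimal convolutional GRU cell serving as a "convolutional recurrent block".
        def __init__(self, ch):
            super().__init__()
            self.zr = nn.Conv2d(2 * ch, 2 * ch, 3, padding=1)  # update/reset gates
            self.hh = nn.Conv2d(2 * ch, ch, 3, padding=1)      # candidate state
        def forward(self, x, h):
            z, r = torch.sigmoid(self.zr(torch.cat([x, h], 1))).chunk(2, 1)
            h_new = torch.tanh(self.hh(torch.cat([x, r * h], 1)))
            return (1 - z) * h + z * h_new

    class RecurrentUNet(nn.Module):
        # Two-scale encoder/decoder; a ConvGRU at each scale fuses the sequence of
        # back-propagated holograms (one per sample-to-sensor distance).
        def __init__(self, base=16):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(2, base, 3, padding=1), nn.ReLU())
            self.down = nn.Conv2d(base, 2 * base, 3, stride=2, padding=1)
            self.enc2 = nn.Sequential(nn.Conv2d(2 * base, 2 * base, 3, padding=1), nn.ReLU())
            self.gru1, self.gru2 = ConvGRUCell(base), ConvGRUCell(2 * base)
            self.up = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
            self.dec = nn.Sequential(nn.Conv2d(2 * base, base, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(base, 2, 1)  # output channels: in-focus real, imaginary
        def forward(self, seq):
            # seq: list of (N, 2, H, W) tensors; channels are the real and imaginary
            # parts of each raw hologram back-propagated to a common axial plane.
            h1 = h2 = None
            for x in seq:
                f1 = self.enc1(x)
                f2 = self.enc2(self.down(f1))
                h1 = f1 if h1 is None else self.gru1(f1, h1)
                h2 = f2 if h2 is None else self.gru2(f2, h2)
            return self.head(self.dec(torch.cat([self.up(h2), h1], 1)))

    # Illustrative use with eight back-propagated holograms of size 64 x 64:
    # net = RecurrentUNet()
    # out = net([torch.randn(1, 2, 64, 64) for _ in range(8)])  # -> (1, 2, 64, 64)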
CHO discloses a convolutional recurrent neural network (RNN) ([0109] The neural network may be an example of a deep neural network (DNN). The DNN may include, for example, a fully connected network, a deep convolutional network, and a recurrent neural network).
LEE also discloses a convolutional recurrent neural network (RNN) may be used in a digital holographic apparatus ([0040], [0044]-[0045], [0099]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ozcan 2019 according to the teachings of CHO or LEE to use a convolutional recurrent neural network (RNN), in order to achieve image restoration from a big data set (CHO [0109]).
Regarding claim 2. Ozcan 2019 discloses The method of claim 1, wherein the plurality of the obtained raw holographic intensity or amplitude images comprises two or more holographic images ([0020], [0049], [0050], [0077] using multi-height phase retrieval with 8 holograms acquired at different sample-to-sensor distances).
Regarding claim 3. Ozcan 2019 discloses The method of claim 1, wherein the plurality of obtained raw holographic intensity or amplitude images comprise super-resolved holographic images of the sample volume ([0084] multiple subpixel-shifted holograms were used to synthesize a higher resolution (i.e., pixel super-resolved) hologram).
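The pixel super-resolution cited above may be illustrated by a simplified shift-and-add sketch (hypothetical Python/NumPy; implementations in the art typically use interpolation or iterative optimization rather than this nearest-grid placement):

    import numpy as np

    def shift_and_add(frames, shifts, k):
        # frames: list of (H, W) low-resolution holograms
        # shifts: per-frame (dy, dx) subpixel shifts, in low-resolution pixels
        # k: super-resolution factor; returns an (H*k, W*k) synthesized hologram
        H, W = frames[0].shape
        acc = np.zeros((H * k, W * k))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            iy = np.round((np.arange(H) + dy) * k).astype(int) % (H * k)
            ix = np.round((np.arange(W) + dx) * k).astype(int) % (W * k)
            acc[np.ix_(iy, ix)] += frame   # place each frame at its shifted grid
            cnt[np.ix_(iy, ix)] += 1.0
        cnt[cnt == 0] = 1.0                # leave unobserved grid points at zero
        return acc / cnt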
Regarding claim 4. Ozcan 2019 discloses The method of claim 1, wherein the plurality of obtained raw holographic intensity or amplitude images are obtained over an axial defocus range of at least 100 μm ([0102] the de-focused images that are used to train the deep neural network 10 are obtained over an axial defocus range. This axial defocus range may vary. In one embodiment, the axial defocus range is less than about 10 mm or in other embodiments less than 5 mm. In still other embodiments, this range is smaller, e.g., less than 1 mm or less than 0.5 mm).
Regarding claim 5. Ozcan 2019 discloses The method of claim 1, wherein the sample volume comprises a tissue block, a tissue section, particles, cells, bacteria, viruses, mold, algae, particulate matter, dust or other micro-scale objects located at various depths within the sample volume ([0041] The sample 22 may include tissue that is disposed on or in an optically transparent substrate 23 (e.g., a glass or plastic slide or the like). In this regard, the sample 22 may include a sample volume that is three dimensional. The sample 22 may also include particles, cells, or other micro-scale objects (those with micrometer-sized dimensions or smaller) located at various depths).
Regarding claim 6. Ozcan 2019 discloses The method of claim 1, wherein twin-image and/or interference-related artifacts are substantially suppressed or eliminated in the output ([0041] The trained deep neural network 10 outputs or generates an output real image 50 and an output imaginary image 52 in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated).
Regarding claim 7. Ozcan 2019 discloses The method of claim 1, wherein the corresponding in-focus phase-recovered ground truth images are obtained using a phase recovery algorithm ([0046] the trained deep neural network 10 that is used to generate amplitude and phase images 50, 52; the deep neural network 10 is trained to minimize a loss function as illustrated in operation 60 between the real and imaginary parts of the network output (Output (real), Output (imaginary)) with respect to the real and imaginary parts of the corresponding ground truth images (Label (real) and Label (imaginary))).
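The cited training objective, minimizing a loss between the real and imaginary parts of the network output and the corresponding ground truth labels, can be illustrated in its simplest form as follows (hypothetical Python sketch; Ozcan 2019 does not limit the loss to mean-squared error):

    import torch

    def field_loss(out_real, out_imag, label_real, label_imag):
        # Joint mean-squared error over the real and imaginary output channels,
        # compared against the phase-recovered ground truth labels.
        return torch.mean((out_real - label_real) ** 2) + \
               torch.mean((out_imag - label_imag) ** 2)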
Regarding claim 8. Ozcan 2019 discloses The method of claim 1, wherein the plurality of obtained raw holographic intensity or amplitude images of the sample volume comprises a stained or unstained tissue sample ([0060] the samples are prepared (e.g., appropriately stained and fixed) with the correct procedures, tailored for the type of the sample).
Regarding claim 9. Ozcan 2019 discloses The method of claim 1, wherein the plurality of obtained raw holographic intensity or amplitude images are back propagated by angular spectrum propagation (ASP) or a transformation that is an approximation to ASP executed by image processing software (figure 2, [0042] back-propagation of the single hologram intensity image 20 is accomplished using the angular-spectrum propagation (ASP) or a transformation that is an approximation to ASP; the image processing software 104 performs free space back-propagation, without phase retrieval, to create a real input image 30 and an imaginary input image 32 of the sample; [0064] back-propagated to the sample plane, yielding the amplitude and phase, or, real and imaginary images of the sample).
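Angular spectrum propagation, as cited above, is a standard transfer-function method; the following minimal Python/NumPy sketch is provided for context (the function name and the example wavelength, pixel pitch, and propagation distance are hypothetical and not drawn from the reference):

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        # Propagate a sampled complex field by distance z (negative z
        # back-propagates) using the angular spectrum transfer function
        # H(fx, fy; z) = exp(i*2*pi*z*sqrt(1/wavelength**2 - fx**2 - fy**2)).
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
        H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Illustrative use: back-propagate the square root of a hologram intensity.
    # u = angular_spectrum_propagate(np.sqrt(hologram), 530e-9, 1.12e-6, -300e-6)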
Regarding claim 10. Ozcan 2019 discloses A method of performing simultaneous auto-focusing and phase-recovery (abstract, the trained deep neural network simultaneously achieves phase-recovery and auto-focusing) using a plurality of holographic intensity or amplitude images of a sample volume (abstract, A method of performing phase retrieval and holographic image reconstruction of an imaged sample includes obtaining a single hologram intensity image of the sample using an imaging device) comprising:
obtaining a plurality of raw holographic intensity or amplitude images of the sample volume at different sample-to-sensor distances using a lens free microscope device comprising an image sensor ([0020], [0049], [0050], [0077] using multi-height phase retrieval with 8 holograms acquired at different sample-to-sensor distances; [0041] receives a single hologram intensity image 20 of a sample 22 obtained with an image sensor 24; [0008] obtaining a single hologram intensity image of the sample using an image sensor (e.g., an image sensor found in a lens-free microscope image); figure 1, [0041] The image 20 is obtained using an imaging device 110, for example, a lens-free microscope device; claim 12, the single hologram intensity image is obtained using a lens-free microscope device); and
providing a trained convolutional neural network (CNN) comprising respective down-sampling and up-sampling paths with respective convolutional blocks at a plurality of different scales wherein convolutional recurrent blocks connect respective convolutional blocks between respective convolutional blocks at the same scale (figure 12, [0034] The network has a down-sampling decomposition path (arrows 77) and a symmetric up-sampling expansion path (arrows 78). The arrows 79 represent the connections between the down-sampling and up-sampling paths, where the channels of the output from the down-sampling block are concatenated with the output from the corresponding up-sampling block, doubling the channel numbers; figure 12, [0103] The trained deep neural network 10 is a CNN and it consists of a down-sampling path 70a-70d as well as a symmetric up-sampling path 72a-72d; [0106]) that is executed by the image processing software using one or more processors, wherein the trained CNN is trained with holographic images obtained at different sample-to-sensor distances and their corresponding in-focus phase-recovered ground truth images ([0009] the deep neural network or convolutional neural network is trained using a plurality of training hologram intensity images. The training updates the neural network's parameter space Θ which includes kernels, biases, and weights. The convolution neural network may be programed using any number of software programs; [0049] The first step in the deep learning-based phase retrieval and holographic image reconstruction framework involves “training” of the neural network 10, i.e., learning the statistical transformation between a complex-valued image that results from the back-propagation of a single hologram intensity of the sample 22 (or object(s) in the sample 22) and the same image of the sample 22 (or object(s) in the sample 22) that is reconstructed using a multi-height phase retrieval algorithm using eight (8) hologram intensities acquired at different sample-to-sensor distances), wherein the trained CNN is configured to receive a plurality of raw holographic intensity or amplitude images obtained at different sample-to-sensor distances ([0049] The first step in the deep learning-based phase retrieval and holographic image reconstruction framework involves “training” of the neural network 10, i.e., learning the statistical transformation between a complex-valued image that results from the back-propagation of a single hologram intensity of the sample 22 (or object(s) in the sample 22) and the same image of the sample 22 (or object(s) in the sample 22) that is reconstructed using a multi-height phase retrieval algorithm using eight (8) hologram intensities acquired at different sample-to-sensor distances) and outputs an in-focus output real image and an in-focus output imaginary image of the sample volume, substantially matching the image quality of the ground truth images ([0101] the output real and/or imaginary images (i.e., the object images) are in-focus; [0013] the output real and/or imaginary images of all the objects in the sample volume are brought into focus, all in parallel).
CHO discloses a convolutional recurrent neural network (RNN) ([0109] The neural network may be an example of a deep neural network (DNN). The DNN may include, for example, a fully connected network, a deep convolutional network, and a recurrent neural network).
LEE also discloses a convolutional recurrent neural network (RNN) may be used in a digital holographic apparatus ([0040], [0044]-[0045], [0099]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ozcan 2019 according to the teachings of CHO or LEE to use a convolutional recurrent neural network (RNN), in order to achieve image restoration from a big data set (CHO [0109]).
Regarding claim 11. CHO further discloses wherein the trained RNN comprises a plurality of dilated convolutional layers ([0126] convolutional layers corresponding to a plurality of kernels with different dilation gaps).
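For context regarding the cited “kernels with different dilation gaps” (CHO [0126]), parallel dilated convolutions may be sketched as follows (hypothetical Python (PyTorch) sketch; channel counts and dilation gaps are illustrative only):

    import torch
    import torch.nn as nn

    class DilatedBlock(nn.Module):
        # Parallel 3x3 convolutions with different dilation gaps; padding equal
        # to the dilation preserves spatial size so branch outputs concatenate.
        def __init__(self, ch, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
            self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)
        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))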
Regarding claim 12. Ozcan 2019 discloses The method of claim 10, wherein the plurality of obtained raw holographic intensity or amplitude images comprises two or more holographic images ([0020], [0049], [0050], [0077] using multi-height phase retrieval with 8 holograms acquired at different sample-to-sensor distances).
Regarding claim 13. Ozcan 2019 discloses The method of claim 10, wherein the plurality of obtained raw holographic intensity or amplitude images comprise super-resolved holographic images of the sample volume ([0084] multiple subpixel-shifted holograms were used to synthesize a higher resolution (i.e., pixel super-resolved) hologram).
Regarding claim 14. Ozcan 2019 discloses The method of claim 10, wherein the plurality of obtained raw holographic intensity or amplitude images are obtained over an axial defocus range of at least 100 μm ([0102] the de-focused images that are used to train the deep neural network 10 are obtained over an axial defocus range. This axial defocus range may vary. In one embodiment, the axial defocus range is less than about 10 mm or in other embodiments less than 5 mm. In still other embodiments, this range is smaller, e.g., less than 1 mm or less than 0.5 mm).
Regarding claim 15. Ozcan 2019 discloses The method of claim 10, wherein the sample volume comprises tissue blocks, tissue sections, particles, cells, bacteria, viruses, mold, algae, particulate matter, dust or other micro-scale objects located at various depths within the sample volume ([0041] The sample 22 may include tissue that is disposed on or in an optically transparent substrate 23 (e.g., a glass or plastic slide or the like). In this regard, the sample 22 may include a sample volume that is three dimensional. The sample 22 may also include particles, cells, or other micro-scale objects (those with micrometer-sized dimensions or smaller) located at various depths).
Regarding claim 16. Ozcan 2019 discloses The method of claim 10, wherein twin-image and/or interference-related artifacts are substantially suppressed or eliminated in the output images of the sample volume ([0041] The trained deep neural network 10 outputs or generates an output real image 50 and an output imaginary image 52 in which the twin-image and/or interference-related artifacts are substantially suppressed or eliminated).
Regarding claim 17. Ozcan 2019 discloses The method of claim 10, wherein the corresponding in-focus phase-recovered ground truth images are obtained using a phase recovery algorithm ([0046] the trained deep neural network 10 that is used to generate amplitude and phase images 50, 52; the deep neural network 10 is trained to minimize a loss function as illustrated in operation 60 between the real and imaginary parts of the network output (Output (real), Output (imaginary)) with respect to the real and imaginary parts of the corresponding ground truth images (Label (real) and Label (imaginary))).
Regarding claim 18. Ozcan 2019 discloses The method of claim 10, wherein the plurality of obtained raw holographic intensity or amplitude images of the sample volume comprises a stained or an unstained tissue sample ([0060] the samples are prepared (e.g., appropriately stained and fixed) with the correct procedures, tailored for the type of the sample).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571) 270-7580. The examiner can normally be reached Monday through Friday, 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOLAN XU/ Primary Examiner, Art Unit 2488