DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 9, 2023, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
This is in response to applicant’s amendment/response filed on December 23, 2025, which has been entered and made of record.
Applicant’s arguments regarding claim objections for claims 2-18 and 20-31 have been fully considered and are persuasive. Claim objections have been withdrawn.
Applicant’s arguments regarding claim rejections under 35 U.S.C. 112(b) have been fully considered. Claim 5’s amendments remedy the 112(b) rejection, rendering it moot; thus, the rejection of claim 5 under 35 U.S.C. 112(b) has been withdrawn. Claim 4’s amendments render the original 112(b) rejection moot, but the amended claim language is indefinite. Thus, applicant’s arguments regarding the rejection of claim 4 under 35 U.S.C. 112(b) are not persuasive, and a new rejection is set forth below.
Applicant’s arguments regarding claim rejections under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant argues that, as made more explicit by the amendments presented herein, the claimed inventions are directed to light field microscopy, in which a multi-lens array is arranged in an intermediate image plane or in a vicinity of an intermediate image plane or in a pupil plane or in a vicinity of a pupil plane of a detection beam path in front of the at least one camera sensor. See, e.g., FIG. 1, multi-lens array 20; paragraphs 0022-26. Applicant notes that neither Wildner nor Ben-Yaker teaches or suggests such a configuration.
For example, the Office acknowledges that Wildner fails to teach a multi-lens array and a camera having at least one camera sensor. Office Action, p. 6. However, the Office asserts that Ben-Yaker teaches such a configuration, and cites col. 2, lines 47-54 and FIG. 5, col. 10, lines 39-61 of Ben-Yaker in support of this contention. Id. at 7.
However, contrary to the Office's assertion, the cited portions of Ben-Yaker, and Ben-Yaker in its entirety, fail to teach the claimed light field arrangement. In particular, col. 2, lines 47-54 of Ben-Yaker merely describe optical detectors, each of which collects light from a different respective segment of the excitation beam. Col. 10, lines 39-61 of Ben-Yaker describe the optical setup shown in FIG. 5. However, neither the illustrated optical setup nor the discussion thereof teaches a multi-lens array arranged as in claim 1. Ben-Yaker includes significant discussion of linear arrays of optical detectors, such as a photomultiplier tube (PMT) array. However, these components are clearly distinct from and do not teach or suggest the claimed multi-lens array.
Examiner respectfully disagrees. Under the broadest reasonable interpretation (BRI), a multi-lens array can be multiple lenses of any type that are either grouped together in the same plane or not. Therefore, Ben-Yaker teaches a multi-lens array (Fig. 5, Col.10, Lines 39-61), as several lenses are present. Figs. 33A-B depict multiple cylindrical lenses and a set of lenses (Col. 38 Lines 50-67 and Col.39 Lines 1-4). Ben-Yaker also teaches an intermediate image plane (Fig. 1 and Col.3 Lines 58-64), as lenses can be placed in a conjugate imaging plane of the image path, which is an intermediate image plane. Since the multiple lenses are placed in the image path, they would be in the vicinity of the intermediate image plane. Thus, Ben-Yaker teaches that the multi-lens array is arranged in a vicinity of an intermediate image plane.
The remaining arguments pertain to the amended claim language and are fully addressed in the prior art rejections set forth below.
Claim Interpretation
The specification contains several terms defined by applicant; these terms are restated below to make the definitions clear on the record.
Illumination Beam Path - denotes all optical beam-guiding and beam-modifying components, for example lenses, mirrors, prisms, gratings, filters, stops, beam splitters, modulators, e.g. spatial light modulators (SLM) by means of which and via which the excitation light from the light source is guided to the sample to be examined.
Detection Beam Path - denotes all beam-guiding and beam-modifying optical components, for example lenses, mirrors, prisms, gratings, filters, stops, beam splitters, modulators, e.g. spatial light modulators (SLM), by means of which and via which the emission light is guided from the sample to be examined to the camera sensor.
Light Field Arrangement - denotes an optical arrangement having at least one multi-lens array and a camera.
Partial Image - denotes a part of the image data set which can be assigned to a specific lens of the multi-lens array.
Image Data of a Partial Image - denotes the totality of the measurement data of the individual pixels that the partial image consists of.
Control Unit - denotes all hardware and software components which interact with the components of the microscope according to the invention for the intended functionality of the latter.
Dynamic Range - is taken to mean, in particular, the ratio of the largest measurement values or measurement data to the smallest measurement values or measurement data.
Measurement Values/Measurement Data - are taken to mean, in particular, the digitized measurement values and, respectively, the measurement data of the individual camera pixels.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 4, which recites the amended limitation “the region comprises or the regions comprise an intermediate region or regions with crosstalk or a region or regions in which individual partial images overlap;” the examiner is unsure what an “intermediate region” is in relation to the claimed regions, as well as what exactly the region or regions comprise. The examiner is interpreting “intermediate region” as a portion of the regions, and is interpreting the amended limitation to mean the region or regions can comprise portions of a region, regions with crosstalk, or regions where the individual partial images overlap. Thus, claim 4 will be examined as best understood by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wildner et al., European Patent Application EP 3822686 A1 (hereinafter Wildner), in view of Ben-Yaker et al., United States Patent US 12133714 B2 (hereinafter Ben-Yaker).
Regarding claim 1, Wildner teaches a method for light field microscopy (Col.1, Para. 0001), wherein the following method steps are carried out:
by a light field arrangement comprising a camera (Digital Camera #20, Col.2 Para. 0009 Lines 55-58) having at least one camera sensor (Image Sensor, Col.2 Para. 0009 Lines 55-58), image data sets (Individual Images of the Sample) which each contain at least one partial image (Subsampling Individual Images, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) of a sample are successively recorded, measurement data from pixels of the at least one camera sensor (Image Sensor, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) in each case being read out,
wherein in order to increase a number of the image data sets (Individual Images of the Sample) which are recordable per unit time, at least one of the following method steps is carried out: (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) By subsampling individual images of the sample, images with less pixels of the individual image are created.
a) measurement data only from a first number of pixels (Reduced Number of Pixels) of the at least one camera sensor (Image Sensor) are read out, the first number being less than a total number of the pixels; (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22)
b) measurement data only from a second number of pixels (n-th Image Pixel) of the at least one camera sensor (Image Sensor) are processed further, the second number being less than the total number of the pixels or the first number of the pixels read; (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) The number of pixels to be read out can be different from other subsampled images.
and c) for some or all of the total number of the pixels or the first number of pixels read, measurement data are processed further with a bit depth which is reduced in comparison with a maximum possible bit depth.
However, Wildner fails to explicitly teach a light field arrangement comprising a multi-lens array and a camera having at least one camera sensor, wherein the multi-lens array is arranged in an intermediate image plane or in a vicinity of an intermediate image plane or in a pupil plane or in a vicinity of a pupil plane of a detection beam path in front of the at least one camera sensor.
Wildner and Ben-Yaker are analogous to the claimed invention because both of them are in the same field of acquiring images of samples from microscopes.
Ben-Yaker teaches a light field arrangement (Col.2, Lines 47-54) comprising a multi-lens array (Fig. 5, Col.10, Lines 39-61) and a camera (Linescan Camera) having at least one camera sensor, wherein the multi-lens array (Fig. 5, Col.10, Lines 39-61 or Multiple Lenses, Col. 38 Lines 50-67 and Col.39 Lines 1-4) is arranged in an intermediate image plane or in a vicinity of an intermediate image plane (Conjugate Image Plane, Fig. 1 and Col.3 Lines 58-64) or in a pupil plane or in a vicinity of a pupil plane of a detection beam path in front of the at least one camera sensor. Under the broadest reasonable interpretation (BRI), a multi-lens array can be multiple lenses of any type that are either grouped together in the same plane or not. Therefore, Ben-Yaker teaches a multi-lens array (Fig. 5, Col.10, Lines 39-61), as several lenses are present. Figs. 33A-B depict multiple cylindrical lenses and a set of lenses (Col. 38 Lines 50-67 and Col.39 Lines 1-4). Ben-Yaker also teaches an intermediate image plane (Fig. 1 and Col.3 Lines 58-64), as lenses can be placed in a conjugate imaging plane of the image path, which is an intermediate image plane. Since the multiple lenses are placed in the image path, they would be in the vicinity of the intermediate image plane. Thus, Ben-Yaker teaches that the multi-lens array is arranged in a vicinity of an intermediate image plane. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s microscope to incorporate Ben-Yaker’s multi-lens array, since doing so would provide the benefit of reducing the scanning time of the images, as multiple lenses can be used to scan multiple parts of the sample simultaneously.
Regarding claim 2, Wildner fails to teach wherein three-dimensional sample information is reconstructed at least from a selection of image data of the partial images. Wildner teaches assembling a one-dimensional or two-dimensional image from the subsampled individual images. (Col. 5 Para. 0019 Line 58, Col.6 Lines 1-11, and Col.1 Para. 0004 Lines 28-49)
However, Ben-Yaker teaches wherein three-dimensional sample information is reconstructed at least from a selection of image data of the partial images. (Fig. 4 and Fig. 11, Col.14, Lines 1-17) Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s assembly of images to incorporate Ben-Yaker’s reconstruction of the sample in three dimensions, since doing so would provide the benefit of reconstructing the image of the sample in three dimensions, which increases the ability to study the sample in greater detail by creating multiple dimensional views of the sample.
Regarding claim 3, Wildner teaches wherein the number of the image data sets (Subsampling Individual Images) of the camera sensor (Image Sensor) which are recorded per unit time is increased by reducing a number of the pixels to be read. (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22)
Regarding claim 4, Wildner teaches wherein pixels in one region (Area of Interest of Individual Images, Col.3 Para. 0011 Lines 17-22) or in a plurality or all of the regions of the camera sensor (Captured Image data) which satisfy one or more of the following conditions are not read or the measurement data thereof are not, or not completely, evaluated: (Col.20 Para. 0073, Lines 5-49)
the measurement data of the pixels in the region (Area of Interest of Individual Image) or the regions contain no image information; (Col.16 Para. 0058 Lines 11-21) The user chooses the area of interest of the individual images (Col.3 Para. 0010 Lines 4-16) and the post-processing performed on the individual images (Col.3 Para 0013 Lines 39-57). Thus, a user can choose a region that contains no image information and can choose how the individual image data is evaluated.
the measurement data of the pixels in the region or the regions cannot contain image information;
the region comprises or the regions comprise an intermediate region or regions with crosstalk or a region (Area of Interest of Individual Image) or regions in which individual partial images overlap;
and, the measurement data of the pixels in the region or the regions are not required for the reconstruction of three-dimensional sample information.
Regarding claim 5, Wildner teaches wherein at least one of the method steps of:
selecting pixels of the camera sensor (Image Sensor) that are to be read; (Col.13, Para. 0049, Lines 41-58)
selecting pixels (n-th Image Pixel) of the camera sensor that are to be evaluated; (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22)
and processing (Post-Processing Filters) measurement data from pixels of the camera sensor that are to be evaluated; (Col.3 Para 0013 Lines 39-57)
is carried out partly or completely in one or more of the following components:
camera controller;
camera driver (Image Sensor Driver 124) of the control unit (Control Unit 90, Col.15 Para. 0057 Lines 53-58 and Col.16 Lines 1-10); (Col.16, Para. 0059, Lines 22-43)
and frame grabber of the control unit.
Regarding claim 6, Wildner teaches wherein a number of partial images (Subsampling Individual Images) of an image data set (Individual Images of the Sample) that are to be evaluated is reduced in order to increase a number of the image data sets which are recordable per unit time. (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) The number of subsampled images is directly based on the number of individual images; thus, reducing the number of individual images can reduce the number of subsampled images.
Regarding claim 7, Wildner teaches wherein a size of the image field (Area of Interest) during the recording of one image data set (Individual Images of the Sample, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) or a series of image data sets in the case of one partial image (Subsampling Individual Images) or a plurality or all of the partial images is reduced in comparison with a maximum possible size of the image field (Area of Interest). (Col.18 Para. 0069 Line 58 and Col.19 Lines 1-16) The user can select the area of interest, which can be a portion of the image field.
Regarding claim 8, Wildner teaches wherein the size of an image field (Area of Interest, Col.18 Para. 0069 Line 58 and Col.19 Lines 1-16) of at least one partial image (Subsampling Individual Images) or the size of an image field of a plurality or all of the partial images (Subsampling Individual Images) in the case of a sequence of recordings of image data sets (Individual Images of the Sample, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) is altered from one recording to the following recording. (Col. 18, Para. 0068, Lines 41-57) Individual images can have different numbers of pixels and be in different resolutions. The area of interest is chosen by the user and can be different sizes.
Regarding claim 9, Wildner teaches wherein the measurement data from one partial image (Subsampling Individual Images) or from a plurality or all of the partial images of an image data set (Individual Images of the Sample, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) are evaluated differently. (Col. 18, Para. 0068, Lines 41-57) Individual images are processed differently based on the resolution mode and the amount of blur present in the images. (Col.5 Para. 0017 Lines 28-42)
Regarding claim 10, Wildner teaches wherein at least one of the parameters:
size of the image field (Area of Interest), (Col.18 Para. 0069 Line 58 and Col.19 Lines 1-16)
resolution, (Col. 18, Para. 0068, Lines 41-57)
and bit depth,
is defined individually in each case for one partial image (Subsampling Individual Images, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) or a plurality or all of the partial images, depending on the properties of that lens (Col.2 Para. 0006 Lines 28-36) of the multi-lens array which has generated the relevant partial image (Subsampling Individual Images) on the camera sensor (Image Sensor).
However, Wildner fails to explicitly teach a multi-lens array.
Ben-Yaker teaches a multi-lens array. (Fig. 5, Col.10, Lines 39-61) Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s microscope to incorporate Ben-Yaker’s multi-lens array, since doing so would provide the benefit of reducing the scanning time of the images, as multiple lenses can be used to scan multiple parts of the sample simultaneously, and each lens can have specialized properties to improve evaluation of the images.
Regarding claim 11, Wildner fails to teach wherein a region illuminated on and/or in the sample is varied temporally and/or spatially.
However, Ben-Yaker teaches wherein a region illuminated on and/or in the sample is varied temporally and/or spatially. (Col.2 Lines 18-26) The sample is illuminated based on the excitation beam, and the sample is scanned line by line. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s microscope to incorporate Ben-Yaker’s excitation beam that illuminates sections of the sample, since doing so would provide the benefit of focusing the illumination on specific regions of the sample. Depending on the evaluation being performed, the entire sample does not need to be illuminated; thus, resources can be directed to specific areas of the sample.
Regarding claim 12, Wildner fails to teach wherein the sample is illuminated with a light sheet.
However, Ben-Yaker teaches wherein the sample is illuminated with a light sheet. (Col.8, Lines 33-46) Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s microscope to incorporate Ben-Yaker’s light sheet, since doing so would provide the benefit of minimizing light exposure and selectively illuminating specific regions of the sample.
Regarding claim 13, Wildner fails to teach wherein the light sheet is scanned through the sample, different regions on or in the sample being illuminated depending on the scanning position of the light sheet.
However, Ben-Yaker teaches wherein the light sheet (Col.8, Lines 33-46) is scanned through the sample, different regions (Excitation Line, Col.9 Lines 9-26) on or in the sample being illuminated depending on the scanning position (Scanning Direction, Col. 39 Lines 15-25) of the light sheet. (Col. 10 Lines 39-61) Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s microscope to incorporate Ben-Yaker’s light sheet that controls the scanning position, since doing so would provide the benefit of utilizing a light sheet, which allows a specific region of the sample to be selected for illumination.
Regarding claim 14, Wildner teaches wherein the partial images (Subsampling Individual Images) of an image data set (Individual Images of the Sample, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) in each case show an only partly illuminated sample. A partial image shows a portion of the sample; this portion can be a part of the illuminated sample.
Regarding claim 15, Wildner fails to teach wherein a region of pixels of the camera sensor that are to be read and/or evaluated is coordinated with a region illuminated on or in the sample in such a way that only measurement data from pixels in regions of the camera sensor which correspond to illuminated regions of the sample are read out and/or evaluated.
However, Ben-Yaker teaches wherein a region of pixels of the camera sensor (linescan camera, Col. 2 Lines 47-54) that are to be read and/or evaluated is coordinated with a region (line, Col.4, Lines 4-12) illuminated on or in the sample in such a way that only measurement data from pixels in regions (lines) of the camera sensor (linescan camera) which correspond to illuminated regions (lines) of the sample are read out and/or evaluated. (Col. 20, Lines 51-61) The sample is illuminated and scanned in lines. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s illumination of the entire sample to incorporate Ben-Yaker’s illumination of lines of the sample, since doing so would provide the benefit of focusing on specific portions of the sample, which would enhance the capture of those portions.
Regarding claim 18, Wildner teaches wherein the resolution of the camera is reduced (Reduced Resolution Mode) in comparison with a maximum possible resolution. (Col. 15 Para. 0057 Lines 53-58 and Col.16 Lines 1-10)
Claim(s) 19-20, 22-28, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Yaker et al., United States Patent US 12133714 B2 (hereinafter Ben-Yaker), in view of Wildner et al., European Patent Application EP 3822686 A1 (hereinafter Wildner).
Regarding claim 19, Ben-Yaker teaches a device for light field microscopy comprising:
a light source for emitting excitation light (Excitation Laser/Excitation Line, Col.10 Lines 39-61),
an illumination beam path for guiding the excitation light (Excitation Laser/Excitation Line) onto or into a sample, (Col.10 Lines 39-61)
a detection beam path at least comprising a microscope objective (Excitation Objective) and a multi-lens array (Fig. 5) for guiding emission light (Excitation Laser/Excitation Line) to a camera, wherein the multi-lens array (Fig. 5, Col.10, Lines 39-61 or Multiple Lenses, Col. 38 Lines 50-67 and Col.39 Lines 1-4) is arranged in an intermediate image plane or in a vicinity of an intermediate image plane (Conjugate Image Plane, Fig. 1 and Col.3 Lines 58-64) or in a pupil plane or in a vicinity of a pupil plane of a detection beam path in front of the at least one camera sensor, said emission light being emitted by the sample as a consequence of being impinged on by the excitation light, (Col.10 Lines 39-61)
However, Ben-Yaker fails to teach:
the camera for sequentially recording image data sets which each contain at least one partial image of the sample, the camera having at least one camera sensor and a camera controller,
and a control unit for interacting at least with the camera controller, and for evaluating image data supplied by the camera,
wherein in order to increase a number of image data sets which are recordable per unit time, at least one of the following features is realized:
A) the camera controller is configured to read out measurement data only from a first number of pixels of the at least one camera sensor, the first number being less than a total number of the pixels;
B) the control unit and/or the camera controller are/is configured to process further measurement data only from a second number of pixels of the at least one camera sensor, the second number being less than the total number of the pixels or the first number of the pixels read;
and C) the control unit and/or the camera controller are/is configured to process further measurement data from some or all of the pixels read with a bit depth which is reduced in comparison with a maximum possible bit depth.
Wildner teaches:
the camera (Digital Camera, Col. 2 Para. 0009 Lines 55-58) for sequentially recording image data sets which each contain at least one partial image of the sample, the camera having at least one camera sensor (Image Sensor) and a camera controller (Image Sensor Driver 124 Col.16 Para. 0050 Lines 22-43),
and a control unit (Control Unit 90 Col.18 Para. 0067 Lines 28-39) for interacting at least with the camera controller (Image Sensor Driver 124 Col.16 Para. 0050 Lines 22-43), and for evaluating image (post-processing) data supplied by the camera,
wherein in order to increase a number of image data sets (Individual Images of the Sample, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) which are recordable per unit time, at least one of the following features is realized:
A) the camera controller (Image Sensor Driver 124 Col.16 Para. 0050 Lines 22-43) is configured to read out measurement data only from a first number of pixels (Reduced Number of Pixels) of the at least one camera sensor (Image Sensor), the first number being less than a total number of the pixels; (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22)
B) the control unit (Control Unit 90 Col.18 Para. 0067 Lines 28-39) and/or the camera controller (Image Sensor Driver 124 Col.16 Para. 0050 Lines 22-43) are/is configured to process further measurement data only from a second number of pixels (n-th Image Pixel) of the at least one camera sensor (Image Sensor), the second number being less than the total number of the pixels or the first number of the pixels read; (Col.3 Para 0014 Line 58 and Col.4 Lines 1-22)
and C) the control unit and/or the camera controller are/is configured to process further measurement data from some or all of the pixels read with a bit depth which is reduced in comparison with a maximum possible bit depth.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s microscope to incorporate Wildner’s camera sensor and control unit, since doing so would provide the benefit of controlling the camera and increasing the flexibility of the microscope by enabling the sample to be read in various different pixel amounts, which can increase the speed of scanning a sample.
Regarding claim 20, Ben-Yaker teaches wherein the multi-lens array is arranged in a plane optically conjugate to the back focal plane of the microscope objective, or in the vicinity of such a plane (Fig. 4 and Fig. 1), or wherein the multi-lens array (Col.10, Lines 39-61) is arranged in the intermediate image plane or in the vicinity of the intermediate image plane. (Fig. 4 and Fig. 1)
Regarding claim 22, Ben-Yaker teaches wherein the control unit (Image Reconstruction System, Col. 2 Lines 55-67) is configured for reconstructing three-dimensional sample information (Fig. 4 and Fig. 11) at least from a selection of the image data supplied by the camera. (Col.14 Lines 1-17)
Regarding claim 23, Ben-Yaker fails to teach:
wherein the partial images of an image data set are rectangular,
and wherein the camera sensor is oriented relative to the partial images such that pixel lines and pixel columns of the camera sensor are aligned parallel to the edges of the partial images.
However, Wildner teaches:
wherein the partial images (Subsampling Individual Images, Col.3 Para 0014 Line 58 and Col.4 Lines 1-22) of an image data set (Individual Images of the Sample) are rectangular. A user can choose the area of interest, and the images can have various resolutions/pixel amounts; one could easily choose the images to be rectangular in shape.
and wherein the camera sensor (Image Sensor) is oriented relative to the partial images (Subsampling Individual Images) such that pixel lines and pixel columns of the camera sensor are aligned parallel to the edges of the partial images. (Col.3 Para. 0012 Lines 23-38 and Col.1 Para. 0004 Lines 28-49) The image sensor aligns with the area of interest being captured through zooming and lateral navigation.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s microscope to incorporate Wildner’s camera sensor that captures partial images, since doing so would provide the benefit of increasing the flexibility of the microscope by enabling the sample to be read in various different pixel amounts, which can increase the speed of scanning a sample as well as produce partial images in various pixel amounts.
Regarding claim 24, Ben-Yaker fails to teach wherein the control unit has one or more of the following component parts:
camera driver
and frame grabber.
However, Wildner teaches wherein the control unit (Control Unit 90, Col. 18 Para. 0067 Lines 28-39) has one or more of the following component parts:
camera driver (Image sensor Driver 124, Col.16 Para. 0059, Lines 22-43);
and frame grabber.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s camera to incorporate Wildner’s camera driver, since doing so would provide the benefit of controlling the camera in the microscope to enhance the microscope’s flexibility.
Regarding claim 25, Ben-Yaker teaches wherein the multi-lens array has different lenses. (Col.10 Lines 39-61)
Regarding claim 26, Ben-Yaker teaches wherein the illumination beam path (Excitation Beam) comprises a scanner for the purpose of spatially and temporally manipulating a region illuminated on and/or in the sample. (Col.2 Lines 18-26 and Col. 10 Lines 39-61)
Regarding claim 27, Ben-Yaker teaches wherein the control unit (Control System, Col. 10 Lines 29-38) is configured to control one or both of the following components:
scanner; (Col. 2 Lines 18-26)
and field stop.
Regarding claim 28, Ben-Yaker teaches wherein the control unit (Control System, Col. 10 Lines 29-38) is configured to coordinate a region (line, Col. 4, Lines 4-12) of pixels of the camera sensor (linescan camera, Col. 2 Lines 47-54) that are to be evaluated with a region (line, Col. 4, Lines 4-12) illuminated on or in the sample in such a way that only measurement data from pixels in regions (line, Col. 4, Lines 4-12) of the camera sensor (linescan camera, Col. 2 Lines 47-54) which correspond to illuminated regions (line, Col. 4, Lines 4-12) of the sample are evaluated. (Col. 20, Lines 51-61)
Regarding claim 30, Ben-Yaker teaches illumination beam path is configured for illuminating the sample with a light sheet oriented obliquely with respect to the optical axis. (Fig. 1 and Fig. 5)
Claim(s) 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Wildner et al., European Patent Application EP 3822686 A1 (hereinafter Wildner), in view of Ben-Yaker et al., U.S. Patent US 12133714 B2 (hereinafter Ben-Yaker), and further in view of Wagner et al., U.S. Patent Application Publication US 20230065504 A1 (hereinafter Wagner).
Regarding claim 16, Wildner and Ben-Yaker fail to teach wherein in the case of a selection of image data of an image data set or in the case of all image data of an image data set, a dynamic range is reduced in comparison with a dynamic range with which the image data were originally recorded.
Wildner, Ben-Yaker, and Wagner are analogous art to the claimed invention because all three are in the same field of endeavor of acquiring images through microscopy.
However, Wagner teaches wherein in the case of a selection of image data (Para. 0014) of an image data set or in the case of all image data of an image data set (Para. 0514), a dynamic range is reduced (Compressing Image Data) in comparison with a dynamic range with which the image data were originally recorded. (Para. 0575) Image pre-processing can be performed on the image data before the specific image data is transferred to another location. The pre-processing can include compressing the image data, which could reduce the dynamic range. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s Microscope, as altered by Ben-Yaker’s Multi-Lens Array, to incorporate Wagner’s Pre-Processing Operations, since doing so would provide the ability to perform various data processing operations on the images, enhancing the researcher’s ability to manipulate the image data for different research purposes.
Regarding claim 17, Wildner and Ben-Yaker fail to teach wherein the image data are compressed in order to reduce the dynamic range.
However, Wagner teaches wherein the image data (Para. 0014) are compressed in order to reduce the dynamic range. (Para. 0575) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wildner’s Microscope, as altered by Ben-Yaker’s Multi-Lens Array, to incorporate Wagner’s Pre-Processing Operation of Compressing Data, since doing so would provide the ability to perform various data processing on the images, such as reducing the dynamic range by compression, enabling researchers to manipulate the image data for different research purposes.
Claim(s) 21, 29, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Yaker et al., U.S. Patent US 12133714 B2 (hereinafter Ben-Yaker), in view of Wildner et al., European Patent Application EP 3822686 A1 (hereinafter Wildner), and further in view of Wagner et al., U.S. Patent Application Publication US 20230065504 A1 (hereinafter Wagner).
Regarding claim 21, Ben-Yaker and Wildner fail to teach wherein the camera controller contains a microcontroller or an FPGA or is realized by a microcontroller or an FPGA.
However, Wagner teaches wherein the camera controller contains a microcontroller or an FPGA or is realized by a microcontroller or an FPGA. (Para. 0770) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s Microscope, as modified by Wildner, to incorporate Wagner’s FPGA Controller, since doing so would provide the benefit of a camera controller that can be reconfigured for a specific task and that provides higher processing power compared to a generic controller.
Regarding claim 29, Ben-Yaker and Wildner fail to explicitly teach wherein the illumination beam path comprises a microscope objective and a stop for setting a numerical aperture of the microscope objective.
However, Wagner teaches wherein the illumination beam path comprises a microscope objective and a stop (Aperture Stop 13714, Para. 0852) for setting a numerical aperture of the microscope objective. (Para. 0856) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s Microscope Objective, as modified by Wildner, to incorporate Wagner’s Aperture Stop, since doing so would provide the benefit of limiting the amount of light that reaches the image plane in a microscope, preventing too much light from being sent to the image plane and allowing brightness levels to be modified.
Regarding claim 31, Ben-Yaker and Wildner fail to teach wherein a field stop is present for the purpose of setting a size of the image field in the detection beam path and/or in the illumination beam path.
However, Wagner teaches wherein a field stop (Field Stop 13710) is present for the purpose of setting a size of the image field in the detection beam path and/or in the illumination beam path. (Para. 0852) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Yaker’s Microscope, as modified by Wildner, to incorporate Wagner’s Field Stop, since doing so would provide the benefit of controlling the area of the sample that is illuminated.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANNA R COCHRAN whose telephone number is (571) 272-4671. The examiner can normally be reached Monday-Friday, 7:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIANNA RENAE COCHRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615