Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,147

METHOD, LIGHT MICROSCOPE, AND COMPUTER PROGRAM FOR LOCALIZING OR TRACKING EMITTERS IN A SAMPLE

Status: Non-Final Office Action (§103)
Filed: Aug 30, 2023
Examiner: LEE, SHUN K
Art Unit: 2884
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Abberior Instruments GmbH
OA Round: 1 (Non-Final)
Grant probability: 42% (moderate)
Expected OA rounds: 1-2
Expected time to grant: 3y 9m
Grant probability with interview: 58%

Examiner Intelligence

Career allowance rate: 42% (294 granted / 701 resolved cases; -26.1% vs Tech Center average)
Interview lift: +15.7% higher allowance in resolved cases with an interview
Typical timeline: 3y 9m average prosecution; 61 applications currently pending
Career history: 762 total applications across all art units
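The headline rate above follows directly from the career counts, and the interview lift is simply the difference between the with- and without-interview allowance rates. A short Python sketch of that arithmetic (the with/without rates shown are hypothetical illustrations, since the panel reports only the +15.7% lift, not the per-group counts):

```python
granted, resolved = 294, 701  # career counts reported in the panel above

career_allow_rate = granted / resolved
print(f"Career allowance rate: {career_allow_rate:.0%}")  # ~42%

# Interview lift = allowance rate with an interview minus the rate without.
# These two rates are hypothetical values chosen only to show the arithmetic.
rate_with, rate_without = 0.521, 0.364
interview_lift = rate_with - rate_without
print(f"Interview lift: {interview_lift:+.1%}")
```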

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§112: 23.8% (-16.2% vs TC avg)

Deltas are measured against a Tech Center average estimate; based on career data from 701 resolved cases.
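Because each per-statute figure is reported as a delta against a Tech Center average, the implied baseline can be recovered by subtraction. A minimal Python sketch using the numbers from the table above (the dictionary names are illustrative):

```python
# Per-statute rates from the table above, with their reported deltas
# vs the Tech Center average; the implied TC baseline is rate - delta.
examiner_rate = {"101": 1.9, "102": 20.4, "103": 50.6, "112": 23.8}
delta_vs_tc = {"101": -38.1, "102": -19.6, "103": 10.6, "112": -16.2}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # each implied baseline comes out to 40.0
```

Notably, every statute's implied Tech Center baseline works out to the same 40.0%, suggesting the dashboard compares each statute against a single overall TC estimate rather than per-statute averages.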

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code. Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Interpretation

The specification (e.g., see “… projections through an optical system (in particular objective lens and further lenses of a light microscope as well as at least one pinhole) into the sample are not congruent. That is, the respective projection areas may be disjoint or overlapping, but not completely congruent …” in the last paragraph on pg. 6) serves as a glossary (MPEP § 2111.01) for the claim term “not congruent”.

The specification (e.g., see “… first partial area covers a central circular area of the detection plane. In particular, the second partial area covers an annular region of the detection plane arranged around the central circular region. Depending on the number, size, shape, and arrangement of the detector elements (e.g., Cartesian or hexagonal), the detector elements approximate the partial areas more or less accurately. For example, with relatively few detector elements arranged on a Cartesian grid, the central circular area can be covered by a square first partial area …” in the second paragraph on pg.
13) serves as a glossary (MPEP § 2111.01) for the claim term “first partial area covers a central circular area of the detection plane”.

The specification (e.g., see “… background is estimated in parallel with the determination of the position of the emitter, in particular wherein the value representing the background is determined in parallel with the determination of the position of the emitter. That is, the determination of the position of the emitter and the determination of the background are performed simultaneously or alternating with each other during the illumination sequence …” in the last paragraph on pg. 14) serves as a glossary (MPEP § 2111.01) for the claim term “background is estimated in parallel with the determination of the position of the emitter”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-12 and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jovin et al. (US 2021/0003834) in view of Heine et al. (Adaptive-illumination STED nanoscopy, Proceedings of the National Academy of Sciences of the United States of America Vol. 114, no. 37 (September 2017), pp. 9797-9802 and Supporting Information, pp. 1-8).

In regard to claim 1, Jovin et al. disclose a method for localizing or tracking emitters in a sample, comprising the steps of: (a) performing an illumination sequence with a plurality of illumination steps, wherein the sample is illuminated in the illumination steps in each case with an intensity distribution of an illumination light comprising a local minimum, wherein the illumination light induces or modulates light emissions of the emitters, and wherein the local minimum of the intensity distribution is positioned in a region around a presumed position of an emitter in the sample in the illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28 and 81); (b) detecting light emissions of the emitter for the respective illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28 and 81); and (c) determining the position of the emitter in the sample from the light emissions detected for the respective illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28 and 81), wherein light emanating from the sample is detected with a plurality of detector elements, the detector elements comprising respective active areas whose projections into a focal plane in the sample are not congruent, wherein a background is estimated based on the light detected with the plurality of detector elements (e.g., “… Each respective conjugate or non-conjugate camera pixel mask is subjected to a dilation and estimations of respective background conjugate or non-conjugate signals are obtained from the dilated conjugate or non-conjugate camera pixel masks for use as corrections of the conjugate (Ic) and non-conjugate (Inc) images. Advantageously, the formation and dilation of the mask provides additional background information improving the image quality … define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraphs 20 and 67), and wherein, in determining the position of the emitter or for the determined position of the emitter, a background correction is performed based on the estimated background (e.g., “… corresponding background and shading images are collected for correction …” in paragraph 85).

The method of Jovin et al. lacks an explicit description of details of the “… advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … STED …” such as illumination positions in the sample are illuminated with different light intensities of the illumination light in the illumination steps. However, “… advanced fluorescence imaging …” details are known to one of ordinary skill in the art (e.g., see “… At one of the next scan positions, the fluorophore emits again, and the STED power is further increased, and so on, until the desired STED power (i.e., resolution) is reached …” on pg. 9798 of Heine et al.). It should be noted that “when a patent claims a structure already known in the prior art that is altered by the mere substitution of one element for another known in the field, the combination must do more than yield a predictable result”. KSR International Co. v. Teleflex Inc., 550 U.S. 398 at 416, 82 USPQ2d 1385 (2007) at 1395 (citing United States v. Adams, 383 U.S. 39, 40 [148 USPQ 479] (1966)). See MPEP § 2143. In this case, one of ordinary skill in the art could have substituted a known conventional STED (e.g., comprising details such as “At one of the next scan positions, the fluorophore emits again, and the STED power is further increased, and so on”, in order to achieve “desired STED power (i.e., resolution)”) for the unspecified STED of Jovin et al., and the results of the substitution would have been predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide a known conventional STED (e.g., comprising details such as illumination positions in the sample are illuminated with different light intensities of the illumination light in the illumination steps) as the unspecified STED of Jovin et al.

In regard to claim 2 which is dependent on claim 1, Jovin et al.
also disclose that the plurality of detector elements comprise at least one first detector element and at least one second detector element, wherein a value representing the light emissions of the emitter is determined based on the light detected with the at least one first detector element (e.g., see “core pixels” in “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67), and wherein a value representing the background is determined based on the light detected with the at least one second detector element (e.g., see “ring of pixels” in paragraph 67).

In regard to claim 3 which is dependent on claim 2, Jovin et al. also disclose that a plurality of second detector elements are provided (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 4 which is dependent on claim 3, Jovin et al. also disclose that a plurality of first detector elements are also provided (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 5 which is dependent on claim 1, Jovin et al. also disclose that a location-dependent background light distribution is determined from the light detected by the plurality of detector elements, wherein the background correction is performed depending on the respective position of the minimum of the intensity distribution by means of respective associated values of the background light distribution (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 6 which is dependent on claim 1, Jovin et al. also disclose that respective values representing the background are determined for the illumination steps based on the detected background light, wherein the background correction is carried out by means of the values representing the background (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 7 which is dependent on claim 2, Jovin et al. also disclose that the plurality of detector elements are arranged in a detection plane (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 8 which is dependent on claim 7, Jovin et al. also disclose that the at least one first detector element covers a first contiguous partial area of the detection plane, wherein the second detector elements cover a second contiguous partial area of the detection plane (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 9 which is dependent on claim 8, Jovin et al. also disclose that the second partial area encloses the first partial area (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 10 which is dependent on claim 9, Jovin et al. also disclose that the first partial area covers a central circular area of the detection plane (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 11 which is dependent on claim 10, Jovin et al. also disclose that the second partial area covers an annular area of the detection plane arranged around the central circular area (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67).

In regard to claim 12 which is dependent on claim 10, Jovin et al. also disclose that the first partial area or the central circular area has an extent of 0.5 Airy Units to 1.0 Airy Units (e.g., “… Advantageously, the PAM illumination apertures have a diameter equal to or below the diameter of an Airy disk (representing the best focused, diffraction limited spot of light that a perfect lens with a circular aperture could create), thus increasing the lateral spatial resolution compared with conventional PAMs and confocal microscopes …” in paragraph 17).
In regard to claim 16 which is dependent on claim 2, Jovin et al. also disclose that a weighted sum or difference is formed between intensities or photon numbers of the light detected by the at least one first detector element and the light detected by the at least one second detector element, wherein the position of the emitter and/or the value representing the background is determined based on the weighted sum or difference (e.g., “… every signal at every position in the image resulting from overlapping camera responses to an entire pattern sequence is represented with the linear equation with coefficients known from the calibration procedure, and the corresponding emission signals impinging on the corre­sponding modulator elements are obtained by the solution to the system of linear equations describing the entire image. Accordingly, the camera signals representing the responses of individual modulator elements are mapped back to their corresponding coordinates in the modulator matrix, such that the signal at every position in the image resulting from the overlapping responses to an entire pattern sequence can be represented as a linear equation with known coefficients and the emission signals impinging on the corresponding modulator elements contributing to the particular position (coordinates), wherein these signals are evaluated by the solution to the system of linear equations describing the entire image. Advantageously, by employing the system of linear equations, the fluorescence imaging is obtained with improved precision …” in paragraph 27). In regard to claim 17 which is dependent on claim 1, Jovin et al. also disclose that the background is estimated in parallel with the determination of the position of the emitter (e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). 
In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) …” in paragraph 67). In regard to claim 18 which is dependent on claim 1, Jovin et al. also disclose that light is detected with the plurality of detector elements before performing the illumination sequence, wherein the background is estimated from the detected light (e.g., “… calibration in RES1 mode is a preferred, but optional feature of RES2 and RES3 modes, which alternatively can be conducted on the basis of other prestored reference data … this value is small or negligible if one subtracts a global background (dark state) signal beforehand …” in paragraphs 41 and 73 or alternatively it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide a “prestored” “global background” estimated from light detected before performing the illumination sequence when additional background processing is “negligible”). In regard to claim 19, Jovin et al. disclose a light microscope for localizing or tracking emitters in a sample, comprising: (a) a light source configured to generate illumination light that induces or modulates light emissions from an emitter in a sample (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … light sources 11, 12 … achieving resolution in fluorescence microscopy substantially below 100 nm. 
The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28, 44, and 81); (b) a light modulator configured to generate an intensity distribution of the illumination light with a local minimum in the sample (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … spatial light modulator device, like a DMD array 20 … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). 
The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28, 44, and 81); (c) a control unit which is configured to perform an illumination sequence with a plurality of illumination steps, the control unit being configured to control the light source and/or the light modulator in such a way that the sample is illuminated in each case with an intensity distribution of the illumination light comprising a local minimum in the illumination steps, and such that the local minimum of the intensity distribution is positioned in a region around a presumed position of an emitter in the sample in the illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … control device 40 … functional software is running in the control device 40 (FIG. 1), that allows to control and setup all connected components, in particular units 10, 20 and 30, and performs fully automated image acquisition … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28, 44, 49, and 81); (d) at least one detector configured to detect light emanating from the sample for the illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … camera 31, 32 … achieving resolution in fluorescence microscopy substantially below 100 nm. 
The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28, 44, and 81); and (e) a computing unit which is configured to determine the position of the emitter in the sample from the light emissions detected for the respective illumination steps (e.g., “… embodiment allows the application of advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … control device 40 … functional software is running in the control device 40 (FIG. 1), that … also includes the further image processing (image distortion correction, registration and subtraction) that is provided to produce the optical sectioned PAM image. The control device 40 allows the integration of the PAM modes such as for example superresolution … achieving resolution in fluorescence microscopy substantially below 100 nm. The molecular localization methods … creation of a depletion or photoconversion illumination (equivalent to the "donuts" in STED) is automatically and precisely achieved by exposing the sample to activation (and readout) light from one side and to depletion (or photoconversion) light from the opposite side, using the same pattern(s). 
The light sources can be employed simultaneously or displaced in time depending on the particular protocol and probe …” in paragraphs 28, 44, 49, and 81), wherein the at least one detector comprises a plurality of detector elements, the detector elements comprising respective active areas whose projections into a focal plane in the sample are not congruent, the computing unit being configured to estimate a background based on the light detected by the plurality of detector elements, and perform a background correction based on the estimated background in determining the position of the emitter or for the determined position of the emitter(e.g., “… define a ring of pixels surrounding the response area established from the calibration (FIG. 5). In the c channel (camera 32), the intensities in the "ring" mask 3 correspond exclusively to the standard background of the camera image (electronic bias+offset), since by definition the in-focus (if) signal and any associated out-of-focus (of) signal corresponding to a given aperture are constrained to the "core" pixels defined by the initial binary mask. A mean background/pixel value (b) is computed from the "ring" pixels of mask 3 and used to calculate the total background contribution (b-number of core pixels) … corresponding background and shading images are collected for correction …” in paragraphs 67 and 85). The microscope of Jovin et al. lacks an explicit description of details of the “… advanced fluorescence imaging techniques, such as RESOLFT, MINFLUX, SIM and/or SMLM … STED …” such as illumination positions in the sample are illuminated with different light intensities of the illumination light in the illumination steps. However, “… advanced fluorescence imaging …” details are known to one of ordinary skill in the art (e.g., see “… At one of the next scan positions, the fluorophore emits again, and the STED power is further increased, and so on, until the desired STED power (i.e., resolution) is reached …” on pg. 
9798 of Heine et al.). It should be noted that “when a patent claims a structure already known in the prior art that is altered by the mere substitution of one element for another known in the field, the combination must do more than yield a predictable result”. KSR International Co. v. Teleflex Inc., 550 U.S. 398 at 416, 82 USPQ2d 1385 (2007) at 1395 (citing United States v. Adams, 383 U.S. 39, 40 [148 USPQ 479] (1966)). See MPEP § 2143. In this case, one of ordinary skill in the art could have substituted a known conventional STED (e.g., comprising details such as “At one of the next scan positions, the fluorophore emits again, and the STED power is further increased, and so on”, in order to achieve “desired STED power (i.e., resolution)”) for the unspecified STED of Jovin et al., and the results of the substitution would have been predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide a known conventional STED (e.g., comprising details such as illumination positions in the sample are illuminated with different light intensities of the illumination light in the illumination steps) as the unspecified STED of Jovin et al.

In regard to claim 20, the cited prior art is applied as in claim 1 above. Jovin et al. disclose a non-transitory computer readable medium for storing computer instructions for localizing or tracking emitters in a sample that, when executed by one or more processors associated with a light microscope, causes the one or more processors to perform the method (e.g., “… apparatus, e.g. the control device apparatus comprising a computer-readable storage medium containing program instructions for carrying out one of the inventive methods are described …” in paragraph 33).
Allowable Subject Matter

Claims 13-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the instant application is deemed to be directed to a nonobvious improvement over the invention disclosed in US 2021/0003834. The improvement comprises, in combination with other recited elements, light emanating from the sample passing through a pinhole to the at least one first detector element, wherein the pinhole comprises a reflective surface, and wherein the at least one second detector element is arranged on a side of the pinhole opposite the at least one first detector element so that light reflected from the reflective surface is detected by the at least one second detector element.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shun Lee, whose telephone number is (571) 272-2439. The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Porta, can be reached at (571) 272-2444. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SL/
Examiner, Art Unit 2884

/DAVID P PORTA/
Supervisory Patent Examiner, Art Unit 2884
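The ring-mask background correction quoted above from Jovin et al. (paragraphs 67 and 85) reduces to a simple computation: take the mean per-pixel value b over a "ring" of pixels surrounding the "core" response area, scale it by the number of core pixels to get the total background contribution, and subtract that from the summed core signal. A minimal illustrative sketch follows; the function and variable names are hypothetical, not code from the reference, and the masks are given as pixel-coordinate lists for simplicity.

```python
def ring_background_correct(image, core, ring):
    """Background-correct the summed 'core' signal using a surrounding 'ring'.

    image: 2D list of pixel intensities
    core:  list of (row, col) coordinates of the core (in-focus) pixels
    ring:  list of (row, col) coordinates of the background ring pixels
    """
    # Mean background per pixel (b), estimated from the ring mask.
    b = sum(image[r][c] for r, c in ring) / len(ring)
    # Total background contribution: b times the number of core pixels.
    total_background = b * len(core)
    # Background-corrected core signal.
    return sum(image[r][c] for r, c in core) - total_background

# Toy example: 5x5 image with a uniform offset of 10 (camera bias),
# a 3x3 core, a one-pixel border ring, and 100 counts of emitter signal.
img = [[10.0] * 5 for _ in range(5)]
img[2][2] += 100.0
core = [(r, c) for r in range(1, 4) for c in range(1, 4)]
ring = [(r, c) for r in range(5) for c in range(5) if r in (0, 4) or c in (0, 4)]
print(ring_background_correct(img, core, ring))  # → 100.0
```

The ring mean recovers the offset (10.0) because the emitter signal is, by construction, confined to the core pixels, so the subtraction leaves only the emitter's 100 counts.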

Prosecution Timeline

Aug 30, 2023
Application Filed
Sep 08, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12487336
CALIBRATION SYSTEM AND METHOD FOR INTEGRATED OPTICAL PHASED ARRAY CHIP
2y 5m to grant Granted Dec 02, 2025
Patent 12480865
GAS CELL
2y 5m to grant Granted Nov 25, 2025
Patent 12465297
MULTI-MODALITY DENTAL X-RAY IMAGING DEVICE AND METHODS
2y 5m to grant Granted Nov 11, 2025
Patent 12453521
MOBILE MEDICAL DEVICE, MOBILE DEVICE, AND METHOD
2y 5m to grant Granted Oct 28, 2025
Patent 12449554
Scintillator Detectors and Methods for Positron Emission Tomography
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
58%
With Interview (+15.7%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 701 resolved cases by this examiner. Grant probability derived from career allow rate.
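The headline projections are consistent with simple arithmetic on the examiner's career data shown above. A sketch of the presumed derivation (the tool's actual model is not disclosed; the additive interview lift is an assumption):

```python
granted, resolved = 294, 701          # examiner's career grants / resolved cases

career_allow_rate = granted / resolved        # ≈ 0.419, shown as "42%"
interview_lift = 0.157                         # +15.7 percentage points
with_interview = career_allow_rate + interview_lift  # ≈ 0.576, shown as "58%"

print(round(career_allow_rate * 100), round(with_interview * 100))  # → 42 58
```

In other words, the 42% grant probability is just the career allow rate (294/701), and the 58% figure adds the observed 15.7-point interview lift on top of it.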
