Prosecution Insights
Last updated: April 19, 2026
Application No. 17/643,562

DESIGN AND OPTIMIZATION OF DIFFRACTIVE LENSLESS CAMERAS FOR IMAGING AND COMPUTER VISION APPLICATIONS

Status: Non-Final OA (§103)
Filed: Dec 09, 2021
Examiner: DUFFY, CAROLINE TABANCAY
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: William Marsh Rice University
OA Round: 2 (Non-Final)

Grant probability: 80% (Favorable)
Expected OA rounds: 2-3
Expected time to grant: 3y 1m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 80% (62 granted / 78 resolved), +17.5% vs Tech Center average (above average)
Interview lift: +26.9% on resolved cases with an interview (strong)
Typical timeline: 3y 1m average prosecution; 18 applications currently pending
Career history: 96 total applications across all art units

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 78 resolved cases.

Office Action (§103)

The previous Non-Final Rejection dated 11/20/2025 is vacated and the new Non-Final Rejection below is presented.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of prior-filed application 63/123,033, filed December 9, 2020, under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 9, 2021 is being considered by the examiner.

Response to Amendment

The Amendment filed 02/20/2026 has been entered. Claims 1 and 3-19 remain pending. Claims 2 and 20 are cancelled. Claims 21 and 22 are new. In response to the Amendment of Claim 9, the objection of record is withdrawn. In response to the Amendment of the Specification, the objections of record are withdrawn. In response to the Amendment of Claim 6, the rejection under 35 U.S.C. 112(b) is withdrawn.

Claim Objections

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
A series of singular dependent claims is permissible in which a dependent claim refers to a preceding claim which, in turn, refers to another preceding claim. A claim which depends from a dependent claim should not be separated by any claim which does not also depend from said dependent claim. It should be kept in mind that a dependent claim may refer to any preceding independent claim. In general, applicant's sequence will not be changed. See MPEP § 608.01(n).

Claims 3-5 and 18 depend either directly or indirectly from canceled Claim 2. Applicant is advised to amend Claims 3 and 5 such that they depend from Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 5, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chi ("Phase-coded aperture for optical imaging," published 2009) in view of Javidi et al. (US 2018/0247106 A1), further in view of Joshi et al. ("PSF Estimation using Sharp Edge Prediction," published 2008).

Regarding Claim 1, Chi discloses "A method for designing and optimizing … determining one or more optimal point spread functions for a particular application" (Chi, Section 4, paragraph 1 discloses "The important thing in the system design is to find an appropriate intensity pattern A(x, y) which is generated by some phase screen for a point source object." Chi, Section 4.1, paragraph 1 discloses "In order to find the intensity point spread function A(x, y) (See Fig. 1) from Eq. (1), we need to define functions b(x, y) and t(x, y)"; where A(x, y) is an optimal point spread function; where changing A(x, y) dependent on b(x, y) and t(x, y), the bandlimited function and the uniformly redundant array respectively, is determining an optimal point spread function for a particular application); and "determining optimal phase masks based on the one or more optimal point spread functions …" (Chi, Section 4.2 discloses "Calculate phase screen P(ξ, η); After intensity pattern A(x, y) is known, the next step in the system design is to calculate an aperture with a transmission function of P(ξ, η) that can be used to generate the specific intensity pattern A(x, y)… It is an iterative phase calculation method."; where phase screen P(ξ, η) is a phase mask; where determining a phase mask iteratively to generate intensity pattern A(x, y) is determining optimal phase masks based on the optimal point spread function).

[Fig. 1 of Chi]
[Fig. 5 of Chi]

Chi does not explicitly teach "A method for designing and optimizing a lensless imaging device," "using the designed optimal phase masks in a lensless camera," or "wherein the one or more optimal point spread functions are contour-based point spread functions that are optimal for imaging applications." Chi instead teaches applying the technique using a diffraction-limited lens (Section 4.3, paragraph 3 of Chi). However, in an analogous field of endeavor, Javidi teaches "A method for designing and optimizing a lensless imaging device" and "using the designed optimal phase masks in a lensless camera" (Javidi, [0037] discloses "the disclosed lens-less imaging system 100, the converging coherent or partially coherent spherical beam can pass through the specimen and can be modulated, which carries information about the specimen. Any un-modulated part of the beam can be regarded as the reference beam. Likewise, part of this waveform may be unmodulated by the diffuser phase mask placed before the image sensor 110 (e.g., CMOS camera)").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chi to incorporate the teachings of Javidi by applying a phase mask to a lensless imaging system. The prior art Chi contained a method which differed from the claimed device by the substitution of some components; that is, Chi teaches an imaging system using determined point spread functions and phase masks applied to a diffraction-limited lens, while Claim 1 requires a lensless imaging system. The substituted components and their functions were known in the art: Javidi teaches a lensless imaging system that uses a phase mask. One of ordinary skill in the art could have substituted one known element for another, and the results would have been predictable. That is, one of ordinary skill in the art could have substituted the determined phase mask of Chi for the phase mask of Javidi and obtained the predictable result of a lensless imaging system with a particular phase mask.
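Chi's Section 4.2 procedure, quoted above for Claim 1 and relied on again for Claims 5 and 18 below, is a standard iterative phase calculation (Gerchberg-Saxton-style phase retrieval). A minimal sketch, assuming Fraunhofer propagation so that the mask plane and sensor plane are related by a 2-D FFT; the grid size, ring-shaped target PSF, and iteration count are illustrative choices, not taken from the record:

```python
import numpy as np

def gerchberg_saxton(target_psf, n_iter=200, seed=0):
    """Find a phase-only mask whose far-field intensity approximates
    target_psf by alternating constraints between the two planes."""
    rng = np.random.default_rng(seed)
    amp_sensor = np.sqrt(target_psf)      # desired field magnitude at the sensor
    phase = rng.uniform(0.0, 2.0 * np.pi, target_psf.shape)
    for _ in range(n_iter):
        mask_field = np.exp(1j * phase)               # Plane I: unit amplitude (phase-only)
        sensor_field = np.fft.fft2(mask_field)        # propagate to Plane II
        sensor_field = amp_sensor * np.exp(1j * np.angle(sensor_field))
        phase = np.angle(np.fft.ifft2(sensor_field))  # back to Plane I, keep phase only
    return phase

# Toy target PSF: a ring (contour-like) intensity pattern.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
target = ((r > 10) & (r < 14)).astype(float)
target /= target.sum()

mask_phase = gerchberg_saxton(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * mask_phase))) ** 2
achieved /= achieved.sum()
err = np.abs(achieved - target).sum()     # total-variation-style mismatch
```

The mask-plane constraint (unit amplitude) encodes a phase-only screen, and the sensor-plane constraint imposes the target PSF magnitude, mirroring the Plane I / Plane II iteration the Office Action cites for Claim 18.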
The combination of Chi and Javidi does not explicitly teach "wherein the one or more optimal point spread functions are contour-based point spread functions that are optimal for imaging applications." However, in an analogous field of endeavor, Joshi teaches "wherein the one or more optimal point spread functions are contour-based point spread functions that are optimal for imaging applications" (Joshi, Section 4.1, paragraph 1 discloses "Thus, by localizing blurred edges and predicting sharp edge profiles, locally estimating a sharp image is possible." Section 5, paragraph 1 discloses "Once the sharp image is predicted, we estimate the PSF as the kernel that, when convolved with the sharp image, produces the blurred input image"; where sharp edges are contours; where an estimated PSF from a sharp image is a contour-based optimal point spread function; where Joshi is directed to handling blur (Joshi, Abstract discloses "Our method handles blur due to defocus, slight camera motion, and inherent aspects of the imaging system") and thus teaches a point spread function optimal for imaging applications).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chi and Javidi to incorporate the teachings of Joshi by predicting sharp edges to then estimate a PSF. The prior art teaches a 'base' method upon which the claimed invention can be seen as an 'improvement': Chi teaches a phase-coded aperture system that involves finding a point spread function of the system, and the claimed invention recites a method of designing an imaging device that involves determining a contour-based point spread function. The prior art contained a 'comparable' method that has been improved in the same way as the claimed invention: Joshi teaches a method of estimating a point spread function that measures and accounts for blur. Joshi teaches determining edge profiles to predict a sharp image; under the broadest reasonable interpretation, a contour is an edge or outline. One of ordinary skill in the art could have applied the known 'improvement' technique in the same way to the 'base' method, and the results would have been predictable to one of ordinary skill in the art. Chi discloses the problems of image blur: Chi, Section 3, paragraph 1 recites "Also, we see the image of the coded aperture system is a little more blurry than that of the diffraction limited lens, this is because the point spread function of such a system, as in Eq. (12), is wider than that of the diffraction limited system. Some extra digital deconvolution can be applied to the middle image in Fig. 4 to remove the effect of triangle blur function in Eq. (12) and recover a diffraction limited result." Chi resolves the blur using "extra digital deconvolution," but it would be obvious to one of ordinary skill in the art to apply the technique of Joshi by performing sharp edge prediction and then estimating the point spread function. Finally, one of ordinary skill in the art would be motivated to combine the Chi, Javidi, and Joshi references in order to improve speed and accuracy: Joshi, Abstract discloses "Our method is completely automatic, fast, and produces accurate results." Thus, the combination of Chi, Javidi, and Joshi teaches the method of Claim 1.

Regarding Claim 5, the combination of Chi, Javidi, and Joshi teaches "The method according to claim 2, wherein the one or more optimal phase masks are computed by solving a phase retrieval algorithm using the contour-based point spread functions as an intensity at a sensor plane" (Chi, Section 2, paragraph 3 discloses "After A(x, y) is determined, one can use a standard phase retrieval algorithm [12], [13] to calculate the aperture function P(ξ, η)"; see also Fig. 3 and the Fig. 3 caption "The pattern of intensity point spread function A(x, y) at plane II in Fig. 1 and the corresponding phase screen P(x, y) at plane I that can generate such pattern. (a) A digitally constructed pattern from uniformly redundant array shown in Fig. 2 using Eq. (1); and (b) the phase screen to generate the pattern in (a) calculated using phase retrieval algorithm.").

[Fig. 3 of Chi]

Regarding Claim 18, the combination of Chi, Javidi, and Joshi teaches "The method according to claim 5, wherein the phase retrieval algorithm is based on iteratively enforcing constraints on the sensor plane and a phase mask plane" (Chi, Section 4.2, paragraph 1 discloses "This is basically a phase retrieval problem with a diagram shown in Fig. 5. It is an iterative phase calculation method."; see Fig. 1; where Plane I is a phase mask plane; where Plane II is a sensor plane).

Regarding Claim 19, Claim 19 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in the corresponding method claim. Additionally, the rationale and motivation to combine the Chi, Javidi, and Joshi references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Chi, Javidi, and Joshi references discloses "A non-transitory computer readable medium storing instructions, the instructions executable by a computer processor and comprising functionality" (Javidi, [0078] discloses "The computing device 1100 includes one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments.").

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Chi ("Phase-coded aperture for optical imaging," published 2009) in view of Javidi et al. (US 2018/0247106 A1), further in view of Joshi et al. ("PSF Estimation using Sharp Edge Prediction," published 2008), further in view of Chou et al. (US 2008/0253675 A1).

Regarding Claim 6, the combination of Chi, Javidi, and Joshi does not explicitly teach the method of Claim 6. However, in an analogous field of endeavor, Chou teaches "The method according to claim 1, wherein the one or more optimal point spread functions are designed using edge detection" (Chou, [0008] discloses "One embodiment of the present invention provides an image processing method, which comprises: (a) performing edge detection on an image to obtain a plurality of edge pixels of the image; (b) performing a partial point-spread function (PSF) estimation to each of the edge pixels to generate a plurality of partial PSF estimation results; and (c) generating a PSF estimation result according to the partial PSF estimation results"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chi, Javidi, and Joshi to incorporate the teachings of Chou by generating a PSF based on detected edge pixels. One of ordinary skill in the art would be motivated to combine the Chi, Javidi, and Chou references in order to perform imaging without limiting the lens type: Chou, [0007] discloses "Therefore, one objective of the present invention is to provide an image processing method to process a blurred image for generating a clear image without moving a lens and without limiting a lens type." It would be obvious to one of ordinary skill in the art that "without limiting a lens type" may include the lensless configuration of Javidi. Accordingly, the combination of Chi, Javidi, Joshi, and Chou discloses the invention of Claim 6.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Chi ("Phase-coded aperture for optical imaging," published 2009) in view of Javidi et al. (US 2018/0247106 A1), further in view of Joshi et al. ("PSF Estimation using Sharp Edge Prediction," published 2008), further in view of Yu et al. (US 10,663,750 B2).

Regarding Claim 7, the combination of Chi, Javidi, and Joshi does not explicitly teach the method of Claim 7. However, in an analogous field of endeavor, Yu teaches "The method according to claim 1, wherein the one or more optimal point spread functions are designed using template matching features which are optimal for template matching applications" (Yu, column 3, lines 33-37 discloses "Processor 140 iteratively compares each sub-image to combinations of PSF templates within a modeled dictionary of templates (see FIGS. 2-4) and each sub-image is matched and associated with a best-match solution."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chi, Javidi, and Joshi to incorporate the teachings of Yu by determining a best-match PSF using PSF templates. One of ordinary skill in the art would be motivated to combine the Chi, Javidi, Joshi, and Yu references in order to improve super-resolution: Yu, column 2, lines 41-46 discloses "Generally the best-match reconstructions of sub-images are then combined to form a super-resolution image of the object. In some cases overlapping sub-images are captured, in order to improve final image quality." Accordingly, the combination of Chi, Javidi, Joshi, and Yu discloses the invention of Claim 7.

Claims 8, 10, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chi ("Phase-coded aperture for optical imaging," published 2009) in view of Javidi et al. (US 2018/0247106 A1), further in view of Joshi et al. ("PSF Estimation using Sharp Edge Prediction," published 2008), further in view of Wu et al. ("PhaseCam3D – Learning Phase Masks for Passive Single View Depth Estimation," published 2019).

Regarding Claim 8, the combination of Chi, Javidi, and Joshi does not explicitly teach the method of Claim 8. However, in an analogous field of endeavor, Wu teaches "The method according to claim 1, wherein the one or more optimal point spread functions are learned to be optimal for vision and artificial intelligence tasks using data driven techniques" (Wu, Section II, B. a) discloses "Moreover, even though phase mask-based depth estimation relies on textures in the scene for depth estimation as well, PhaseCam3D's use of the data-driven reconstruction network can help to provide depth estimation with implicit prior statistics and interpolation from the deep neural networks." Wu, Section II, B. b) discloses "Secondly, the goal of designing the mask-based imaging system for depth estimation is to make the point spread functions (PSFs) of different depth to have maximum variability"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chi, Javidi, and Joshi to incorporate the teachings of Wu by implementing a data-driven reconstruction network. One of ordinary skill in the art would be motivated to combine the Chi, Javidi, Joshi, and Wu references in order to jointly optimize the front-end optics (as applied to Javidi, a lensless system) and the back-end reconstruction: Wu, Section III, paragraph 1 discloses "Our goal is to achieve state-of-the-art single image depth estimation results with jointly optimized front-end optics along with the back-end reconstruction algorithm. We achieve this via end-to-end training of a neural network for the joint optimization problem". Accordingly, the combination of Chi, Javidi, Joshi, and Wu discloses the invention of Claim 8.
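Wu's Fisher-information/CRLB design criterion, which the rejections of Claims 11 and 14 below rely on, can be illustrated with a 1-D toy model: for a Gaussian PSF sampled on a pixel grid under i.i.d. Gaussian readout noise, the Fisher information about the PSF centroid is the squared per-pixel sensitivity summed over the sensor, and the CRLB is its inverse. All numbers here are illustrative assumptions, not taken from Wu:

```python
import numpy as np

def fisher_crlb(psf_width, noise_sigma=0.01, n_pix=101):
    """Fisher information and CRLB for estimating the centroid of a 1-D
    Gaussian PSF from pixel samples under i.i.d. Gaussian readout noise."""
    x = np.linspace(-5.0, 5.0, n_pix)
    mu = 0.0
    f = np.exp(-(x - mu) ** 2 / (2.0 * psf_width ** 2))
    f /= f.sum()                            # unit-energy sampled PSF
    df_dmu = (x - mu) / psf_width ** 2 * f  # sensitivity of each pixel to mu
    info = np.sum(df_dmu ** 2) / noise_sigma ** 2
    return info, 1.0 / info                 # CRLB: lower bound on Var(mu_hat)

info_narrow, crlb_narrow = fisher_crlb(psf_width=0.5)
info_wide, crlb_wide = fisher_crlb(psf_width=2.0)
# A sharper (more parameter-sensitive) PSF carries more Fisher information,
# hence a lower CRLB, which is the quantity Wu's L_CRLB loss minimizes.
```

The same logic extends to Wu's depth-varying PSFs, where the estimated parameter is the 3-D scene-point location rather than a 1-D centroid.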
Regarding Claim 10, the combination of Chi, Javidi, Joshi, and Wu discloses "The method according to claim 8, wherein a machine learning algorithm is directly used to compute computer vision and artificial intelligence task results based on sensor data acquired from the lensless imaging device" (Wu, Section I, A, paragraph 2 discloses "Thus in our system, the first layer corresponds to physical optical elements. All subsequent layers of our network are digital layers and represent the computational algorithm that reconstructs depth images. We run the back-propagation algorithm to update this network, including the physical mask, end-to-end"; where it would be obvious to one of ordinary skill in the art to substitute the optical elements of Wu with the lensless system of Javidi; where the computational algorithm is a reconstruction network comprising a U-Net, as shown in Fig. 1; where a reconstruction network is a machine learning algorithm used to compute computer vision and artificial intelligence task results). The proposed combination, as well as the motivation for combining the Chi, Javidi, Joshi, and Wu references, presented in the rejection of Claim 8 apply to Claim 10 and are incorporated herein by reference. Thus, the method recited in Claim 10 is met by Chi, Javidi, Joshi, and Wu.

[Fig. 1 of Wu]

Regarding Claim 11, the combination of Chi, Javidi, Joshi, and Wu teaches "The method according to claim 8, wherein an optimality criterion for a point spread function design is maximum detection performance" (Wu, Section III, C., paragraph 5 discloses "The effectiveness of depth-varying PSF to capture the depth information can be expressed using a statistical information theory measure called the Fisher information. Fisher information provides a measure of the sensitivity of the PSF to changes in the 3D location of the scene point [49]. Using the Fisher information function, we can compute CRLB, which provides the fundamental bound on how accurately a parameter (3D location) can be estimated given the noisy measurements"; where Fisher information is an optimality criterion; where the effectiveness of a PSF to capture depth information is a measure of detection performance, the depth being what is detected. Wu also discloses determining a loss using the CRLB, and Section III, C., paragraph 5 discloses "In theory, smaller L_CRLB indicates better 3D localization"; thus, Wu teaches minimizing the loss and thereby maximizing localization, or detection performance). The proposed combination, as well as the motivation for combining the Chi, Javidi, Joshi, and Wu references, presented in the rejection of Claim 8 apply to Claim 11 and are incorporated herein by reference. Thus, the method recited in Claim 11 is met by Chi, Javidi, Joshi, and Wu.

Regarding Claim 14, the combination of Chi, Javidi, Joshi, and Wu teaches "The method according to claim 8, wherein an optimality criterion for a point spread function design is minimizing an error in a defined computer vision or artificial intelligence task" (Wu, Section III, C., paragraph 5 discloses "The effectiveness of depth-varying PSF to capture the depth information can be expressed using a statistical information theory measure called the Fisher information. Fisher information provides a measure of the sensitivity of the PSF to changes in the 3D location of the scene point [49]. Using the Fisher information function, we can compute CRLB, which provides the fundamental bound on how accurately a parameter (3D location) can be estimated given the noisy measurements." Wu also discloses determining a loss using the CRLB, and Section III, C., paragraph 5 discloses "In theory, smaller L_CRLB indicates better 3D localization"; where achieving a smaller loss L_CRLB is minimizing an error in a defined computer vision or artificial intelligence task; where Wu is directed to the task of depth estimation). The proposed combination, as well as the motivation for combining the Chi, Javidi, Joshi, and Wu references, presented in the rejection of Claim 8 apply to Claim 14 and are incorporated herein by reference. Thus, the method recited in Claim 14 is met by Chi, Javidi, Joshi, and Wu.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chi ("Phase-coded aperture for optical imaging," published 2009) in view of Javidi et al. (US 2018/0247106 A1), further in view of Joshi et al. ("PSF Estimation using Sharp Edge Prediction," published 2008), further in view of Jia et al. ("Astronomical Image Restoration and Point Spread Function Estimation with Deep Neural Networks," published 2020).

Regarding Claim 9, the combination of Chi, Javidi, and Joshi does not explicitly teach the method of Claim 9. However, in an analogous field of endeavor, Jia teaches "The method according to claim 8, wherein a learning algorithm to obtain the one or more optimal point spread functions are based on a neural network" (Jia, Figure 1 and the Figure 1 caption disclose "The Generator which learns the point spread function is called PSF-Gen").
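The data-driven mask learning described by Wu (and the network-based PSF learning of Jia) amounts to gradient-based optimization of a phase mask through a differentiable PSF simulation. The toy sketch below uses finite-difference gradients in place of the back-propagation Wu describes, a Fraunhofer (FFT) PSF model, and an arbitrary focused-spot target; every parameter is an illustrative assumption:

```python
import numpy as np

def psf_from_phase(phase):
    """Far-field intensity PSF of a phase-only mask (Fraunhofer model)."""
    field = np.fft.fft2(np.exp(1j * phase))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def loss(phase, target):
    """Mean-squared error between the simulated and target PSFs."""
    return np.mean((psf_from_phase(phase) - target) ** 2)

n = 8
target = np.zeros((n, n))
target[0, 0] = 1.0                     # arbitrary target: a focused spot

rng = np.random.default_rng(1)
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
lr, eps = 100.0, 1e-6
loss_before = loss(phase, target)
for _ in range(60):
    base = loss(phase, target)
    grad = np.zeros_like(phase)
    for i in range(n):                 # finite-difference gradient, standing in
        for j in range(n):             # for the backpropagation Wu describes
            bumped = phase.copy()
            bumped[i, j] += eps
            grad[i, j] = (loss(bumped, target) - base) / eps
    phase -= lr * grad                 # gradient-descent update of the mask
loss_after = loss(phase, target)
```

In Wu's actual pipeline the loss is computed by a reconstruction network over a training set and the gradient flows through automatic differentiation; the sketch only shows the "trainable optics" idea in miniature.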
[Figure 1 of Jia]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chi, Javidi, and Joshi to incorporate the teachings of Jia by using a Cycle-GAN to learn a point spread function from an input image. One of ordinary skill in the art would be motivated to combine the Chi, Javidi, and Jia references in order to obtain a PSF that is sensitive to variations: Jia, Section I, paragraph 2 discloses "For ground based telescopes, the PSF has highly spatial and temporal variations and can not be described by contemporary analytical PSF model" and "As more and more astronomical images are obtained by different telescopes, including high resolution images obtained by space based or ground based telescope with adaptive optic system and low resolution images obtained by ordinary ground based telescopes, is it possible to use the properties of these images to design an algorithm to restore blurred images and obtain the PSF model at the same time?" Jia is directed to learning a PSF dependent on the telescope images, and thus is directed to determining a PSF for "a particular application." Accordingly, the combination of Chi, Javidi, Joshi, and Jia discloses the invention of Claim 9.

Allowable Subject Matter

Claims 3 and 4 are objected to as being dependent upon a rejected base claim and for their dependency on canceled Claim 2, but would be allowable if: i) rewritten in independent form including all of the limitations of the base claim and any intervening claims, and ii) rewritten to depend from a non-canceled claim. Claims 12, 13, and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 21 and 22 are allowed.
The following is a statement of reasons for the indication of allowable subject matter: Regarding Claims 3 and 4, none of the previously cited prior art references explicitly teach “a two-dimensional procedural noise field” or “Perlin noise.” Although Joshi discloses estimating a noise level, but does not explicitly teach a two-dimensional procedural noise field. Perlin noise is known in the art (Perlin, Improving noise, published 2002, Abstract discloses “Two deficiencies in the original Noise algorithm are corrected: second order interpolation discontinuity and unoptimal gradient computation.”). However, the application of Perlin noise to realize the contour-based PSFs “by applying an edge filter on a two-dimensional procedural noise field” is not taught by any of the previously cited prior art. Thus, none of the previously cited prior art, alone or in combination, provide a motivation to teach the ordered combination of Claim 3: “The method according to claim 2, wherein the contour-based point spread functions are realized by applying an edge filter on a two-dimensional procedural noise field.” and Claim 4: “The method according to claim 3, wherein the procedural noise is a Perlin noise.” Regarding Claim 12, the combination of Chi, Javidi, Joshi, and Wu does not explicitly teach “The method according to claim 8, wherein an optimality criterion for a point spread function design is maximum classification performance.” Sitzmann et al. (End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging, published 2018) teaches a loss for measuring the optimization of a MSE loss in computational image reconstruction. (Sitzmann, Section 3.2, paragraph 2 discloses “Models are optimized on a dataset of RGB images (defining Iλ). 
The optimization variables in the optical element and reconstruction method are optimized with respect to the expected mean-squared error loss PNG media_image6.png 88 605 media_image6.png Greyscale over the dataset. The full optimization pipeline is shown in Figure 3. The approach easily generalizes to other data fidelity losses as well as losses on semantic content of images, such as cross-entropy loss for image classification”). However, even under the broadest reasonable interpretation, the mean-squared error loss does not teach an optimality criterion for a maximum classification performance. Sitzmann merely recites the loss is easily generalized to image classification applications, but does not explicitly teach maximizing classification performance. Wu is relied upon to teach learning optimal point spread functions using data driven techniques. Although Sitzmann is directed to simulating a point spread function and using an optimization framework to optimize a computational camera, it would not be obvious to one of ordinary skill in the art to combine the techniques of Wu and Sitzmann to minimize a classification loss (maximize classification). That is, Wu is directed to depth estimation and Sitzmann is directed to extended depth of field and super-resolution. 
The mere indication of application to maximum classification performance by Sitzmann does not explicitly teach the ordered combination of “The method according to claim 8, wherein an optimality criterion for a point spread function design is maximum classification performance.” Regarding Claim 13, none of the previously cited prior art explicitly teaches “The method according to claim 8, wherein an optimality criterion for a point spread function design is maximum recognition performance.” As stated above, Sitzmann broadly teaches a loss function that may be applied to a classification task, and as stated above even under the broadest reasonable interpretation does not explicitly teach a maximum classification performance. Wu teaches a loss function applied to determining depth location, or a detection of a depth location. However, even under the broadest reasonable interpretation determining a depth location is not equivalent to recognition. None of the previously cited prior art explicitly teaches a metric of recognition, identification, understanding or otherwise semantically segmenting as a criterion for a point spread function design. 
Thus, none of the previously cited prior art, alone or in combination, provides a motivation to teach the ordered combination of “The method according to claim 8, wherein an optimality criterion for a point spread function design is maximum recognition performance.” Regarding Claim 15, none of the previously cited references explicitly teaches “The method according to claim 1, wherein the lensless imaging device is calibrated by: capturing a single point spread function; and extrapolating calibration matrices.” Although Wu teaches PSF calibration (Wu, Section V, C., discloses “Although the depth-dependent PSF response of the phase mask is known from simulation, we calibrate our prototype camera to account for any mismatch born out of physical implementation such as aberrations in fabricated phase mask and phase mask aperture alignment”), Wu does not explicitly teach extrapolating calibration matrices, nor does Wu disclose obtaining a single point spread function for the calibration. Li et al. (Depth-dependent PSF calibration and aberration correction for 3D single-molecule localization, published 2019) discloses PSF calibration (Li, Fig. 1 caption discloses “In an SMLM experiment, the data is fitted with the same PSF model as the beads in gel, and the fitted z-positions of the fluorophores are corrected using the calibration from (c).”). However, even under the broadest reasonable interpretation, Li does not explicitly teach “extrapolating calibration matrices.” Additionally, it would not have been obvious to one of ordinary skill in the art to apply the methods of Li to the prior art teachings of Chi, Javidi, and Wu because Li uses stacks of beads to model depth of single points. Although Wu also uses PSFs at different depths, Wu is directed to determining depth of entire scenes (see Fig. 1 of Wu, above), and thus the calibration technique of the PSF of Li is not obviously applicable to the method of Wu. 
Thus, none of the previously cited prior art references, alone or in combination, provides a motivation to teach the ordered combination of “The method according to claim 1, wherein the lensless imaging device is calibrated by: capturing a single point spread function; and extrapolating calibration matrices.” Dependent Claims 16-17 contain all allowable subject matter of Claim 15 and thus are also allowable. Regarding Claim 21, the combination of Chi, Javidi, Joshi, and Wu teaches “A method for designing and optimizing a lensless imaging device comprising: determining an optimal point spread function for a particular application” (Chi, Section 4, paragraph 1 discloses “The important thing in the system design is to find an appropriate intensity pattern A(x, y) which is generated by some phase screen for a point source object.” Chi, Section 4.1, paragraph 1 discloses “In order to find the intensity point spread function A(x, y) (See Fig. 1) from Eq. (1), we need to define functions b(x, y) and t(x, y)”; where A(x, y) is an optimal point spread function; where changing A(x, y) dependent on b(x, y) and t(x, y), or bandlimited function and uniformly redundant array, respectively, is determining an optimal point spread function for a particular application); “and determining the optimal phase mask based on the optimal point spread function and using the designed optimal phase mask in a lensless camera” (Chi, Section 4.2 discloses “Calculate phase screen P(ξ, η); After intensity pattern A(x, y) is known, the next step in the system design is to calculate an aperture with a transmission function of P(ξ, η) that can be used to generate the specific intensity pattern A(x, y)… It is an iterative phase calculation method.”; where phase screen P(ξ, η) is a phase mask; where determining a phase mask iteratively to generate intensity pattern A(x, y) is determining the optimal phase mask based on the optimal point spread function.) 
Javidi, [0037] discloses “the disclosed lens-less imaging system 100, the converging coherent or partially coherent spherical beam can pass through the specimen and can be modulated, which carries information about the specimen. Any un-modulated part of the beam can be regarded as the reference beam. Likewise, part of this waveform may be unmodulated by the diffuser phase mask placed before the image sensor 110 (e.g., CMOS camera)”), “wherein: the one or more optimal point spread functions are learned to be optimal for vision and artificial intelligence tasks using data driven techniques” (Wu, Section II, B. a) discloses “Moreover, even though phase mask-based depth estimation relies on textures in the scene for depth estimation as well, PhaseCam3D’s use of the data-driven reconstruction network can help to provide depth estimation with implicit prior statistics and interpolation from the deep neural networks.” Wu, Section II, B. b) discloses “Secondly, the goal of designing the mask-based imaging system for depth estimation is to make the point spread functions (PSFs) of different depth to have maximum variability”). None of the previously cited references explicitly teaches “an optimality criterion for a point spread function design is maximum classification performance.” Sitzmann et al. (End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging, published 2018) teaches optimizing computational image reconstruction with respect to an MSE loss. (Sitzmann, Section 3.2, paragraph 2 discloses “Models are optimized on a dataset of RGB images (defining Iλ). The optimization variables in the optical element and reconstruction method are optimized with respect to the expected mean-squared error loss [equation image] over the dataset. The full optimization pipeline is shown in Figure 3. 
The approach easily generalizes to other data fidelity losses as well as losses on semantic content of images, such as cross-entropy loss for image classification”). However, even under the broadest reasonable interpretation, the mean-squared error loss does not teach an optimality criterion for maximum classification performance. Sitzmann merely recites that the loss is easily generalized to image classification applications, but does not explicitly teach maximizing classification performance. Wu is relied upon to teach learning optimal point spread functions using data-driven techniques. Although Sitzmann is directed to simulating a point spread function and using an optimization framework to optimize a computational camera, it would not be obvious to one of ordinary skill in the art to combine the techniques of Wu and Sitzmann to minimize a classification loss (maximize classification performance). That is, Wu is directed to depth estimation and Sitzmann is directed to extended depth of field and super-resolution. The mere indication of application to maximum classification performance by Sitzmann does not explicitly teach the ordered combination of “A method for designing and optimizing a lensless imaging device comprising: determining an optimal point spread function for a particular application; and determining the optimal phase mask based on the optimal point spread function and using the designed optimal phase mask in a lensless camera, wherein: the one or more optimal point spread functions are learned to be optimal for vision and artificial intelligence tasks using data driven techniques, and an optimality criterion for a point spread function design is maximum classification performance.” Claim 22 depends on Claim 21 and thus contains all allowable subject matter of Claim 21 and is also allowable. 
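For orientation, the end-to-end framework the examiner is parsing in claims 12-13 and 21 — a differentiable image-formation model whose loss can be swapped from mean-squared error (reconstruction) to cross-entropy (classification) — can be sketched as follows. This is an illustrative sketch only, not Sitzmann's or the application's actual implementation: the Fraunhofer PSF model, the toy linear classifier head, and all function names and array sizes are assumptions for demonstration.

```python
import numpy as np

def psf_from_phase_mask(phase):
    """Fraunhofer approximation (assumed model): PSF = |FFT(e^{i*phase})|^2,
    normalized to unit energy. `phase` holds the mask's optimization variables."""
    field = np.fft.fftshift(np.fft.fft2(np.exp(1j * phase)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def sensor_image(scene, psf):
    """Shift-invariant forward model: circular convolution via FFT."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))

def mse_loss(recon, target):
    """Data-fidelity optimality criterion (the MSE loss quoted above)."""
    return float(np.mean((recon - target) ** 2))

def cross_entropy_loss(logits, label):
    """Semantic optimality criterion: the 'generalization' to classification."""
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, (16, 16))  # phase-mask variables to optimize
scene = rng.random((16, 16))                 # stand-in for one dataset image
meas = sensor_image(scene, psf_from_phase_mask(phase))

# Same pipeline, two different optimality criteria:
loss_imaging = mse_loss(meas, scene)                    # reconstruction quality
logits = rng.standard_normal((10, 256)) @ meas.ravel()  # hypothetical classifier head
loss_vision = cross_entropy_loss(logits, label=2)       # classification quality
```

The dispute in the rejection is precisely whether swapping `mse_loss` for `cross_entropy_loss` in such a pipeline (as Sitzmann says is "easy") amounts to teaching maximum classification performance as the design criterion.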
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE TABANCAY DUFFY whose telephone number is (703) 756-1859. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE TABANCAY DUFFY/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
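As an editorial note on the claim 15 language the examiner found allowable ("capturing a single point spread function; and extrapolating calibration matrices"): one plausible reading is that, under a shift-invariance assumption, a single measured PSF determines the entire system matrix, whose columns are shifted copies of that PSF. The sketch below illustrates that reading in 1-D; the helper name, the circulant construction, and the shift-invariance assumption are hypothetical and are not taken from the application itself.

```python
import numpy as np

def calibration_matrix_from_psf(psf):
    """Extrapolate a full calibration (system) matrix from one measured PSF,
    assuming shift invariance: column k is the PSF circularly shifted by k."""
    n = psf.size
    return np.column_stack([np.roll(psf, k) for k in range(n)])

rng = np.random.default_rng(0)

# one simulated PSF "capture" (1-D for brevity), normalized to unit energy
psf = rng.random(8)
psf /= psf.sum()
A = calibration_matrix_from_psf(psf)

# sanity check: A @ x reproduces circular convolution of x with the PSF
x = rng.random(8)
via_fft = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(x)))
assert np.allclose(A @ x, via_fft)
```

Under this reading, "extrapolating" means no per-pixel measurement is needed: the circulant structure fills in every column of `A` from the single capture, which is why the examiner's distinction over Wu's and Li's depth-stack calibrations is substantive.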

Prosecution Timeline

Dec 09, 2021
Application Filed
Nov 17, 2025
Non-Final Rejection — §103
Feb 20, 2026
Response Filed
Mar 16, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602753
ULTRASOUND IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12602788
METHOD AND SYSTEM FOR FULLY AUTOMATICALLY SEGMENTING CEREBRAL CORTEX SURFACE BASED ON GRAPH NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12597130
IMAGE PROCESSING APPARATUS, OPERATION METHOD OF IMAGE PROCESSING APPARATUS, AND OPERATION PROGRAM OF IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12580081
SYSTEMS AND METHODS FOR DIRECTLY PREDICTING CANCER PATIENT SURVIVAL BASED ON HISTOPATHOLOGY IMAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12567130
REAL-TIME BLIND REGISTRATION OF DISPARATE VIDEO IMAGE STREAMS
2y 5m to grant Granted Mar 03, 2026

Prosecution Projections

2-3
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+26.9%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.