Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 9, 2026 has been entered.
Response to Arguments
Applicant's arguments filed February 9, 2026 with respect to claims 1-13 and 19 have been considered but are moot in view of new grounds of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, 13, 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Rout (PCT Patent Publication No.: WO 2014/083574 A2), hereinafter Rout, in view of Lai (Japan Patent Pub. No.: JP2013509022A), hereinafter Lai.
Regarding claim 1, Rout teaches a method for generating a projection image of a three-dimensional sample (During the digital processing process, index information of the images in the stack is collected and processed to generate a depth map and composite image (i.e., a projection image) or/and a 3D model of the scene. Page 1 line 24), the method comprising: obtaining a set of images of the sample (At step 102, the method obtains a plurality of image captured from a defined subject, where the images are taken from different positions for said subject. Page 4 line 26), wherein each image of the set of images corresponds to a respective focal plane within the sample (According to an embodiment, the system may have arrangement for obtaining images such that each image being obtained at a different focal distance. Page 10 line 29. According to yet another embodiment, the different positions include capturing images from different 'Z' level of the imaging device. The obtained images are arranged in a stack to form an image stack. Page 4 line 32); applying a filter to each image of the set of images (The method further includes, convolving the down sampled image with a
complex wavelet filter bank to generate an energy matrix for said image. The process may be repeated for all the images in the illuminated and color corrected image stack so as to have at least one energy matrix for each image in the stack. Page 8 line 22) to determine a respective depth value for each pixel of an output image of the sample (At step 116, the method generates a depth map using the raw index map and the degree of defocus map. Page 10 line 4), wherein a given depth value represents a depth, within the sample, at which the contents of the sample can be imaged in-focus (At step 110, the method generates a raw index map for the scene. The process of generating raw index map includes analyzing the energy matrix's pixel by pixel basis for all the images and identifying maximum focused pixel for a particular pixel in the image stack. The process is repeated for all the pixels of the scene and an index of all the focused pixels may be used to generate the raw index map. Page 8 line 30); applying the filter (According to an exemplary embodiment, complex wavelet decomposition method may be used for wavelet decomposition. Page 8 line 30) to a particular pixel of a particular image of the set of images (At step 108, the method computes energy content for each pixel of illuminated and color corrected stacked images to generate energy matrix of each image. Page 8 line 12) for determining a measure of variability of pixels of the particular image (At step 108, the method computes energy content (which is a measure of variability) for each pixel of illuminated and color corrected stacked images to generate energy matrix of each image. 
Page 8 line 12) that are in a neighborhood of the particular pixel (It is common knowledge that, instead of working on a single pixel in isolation, a wavelet filter applies transformation over a neighborhood of pixels.); and determining an image value for each pixel of the output image based on the depth value of the pixel of the output image (Using the digital image processing techniques, images taken at different depths of field of the same scene may be combined to produce a single composite image. The digital image processing techniques involve capturing multiple images of the same scene to form an image stack, identifying focused part from multiple images in the stack and recreating a single image with better depth of field by combining the focused parts. Page 1 line 19), wherein determining an image value for a particular pixel of the output image comprises: (i) identifying an image of the set of images that corresponds to the depth value of the particular pixel (Using the digital image processing techniques, images taken at different depths of field of the same scene may be combined to produce a single composite image. The digital image processing techniques involve capturing multiple images of the same scene to form an image stack, identifying focused part from multiple images in the stack and recreating a single image with better depth of field by combining the focused parts. Page 1 line 19); and (ii) determining the image value for the particular pixel based on a pixel, of the identified image, having a location within the identified image that corresponds to the particular pixel (Using the digital image processing techniques, images taken at different depths of field of the same scene may be combined to produce a single composite image. 
The digital image processing techniques involve capturing multiple images of the same scene to form an image stack, identifying focused part from multiple images in the stack and recreating a single image with better depth of field by combining the focused parts. Page 1 line 19).
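For illustration only, the focus-stacking process described in the cited passages of Rout (computing a per-pixel energy measure for each image in the stack, identifying the maximum-focus plane per pixel to form an index map, and composing the output from the identified planes) can be sketched as follows. This is a minimal, hypothetical Python sketch: the function name and the use of local variance as the focus measure are assumptions for illustration, not Rout's actual wavelet-based implementation.

```python
import numpy as np

def focus_stack(stack):
    """Compose a single all-in-focus image from a focal stack.

    stack: array of shape (num_planes, height, width), grayscale.
    Returns (composite, index_map), where index_map identifies, for
    each output pixel, the image of the stack that is most in focus.
    """
    # Per-pixel focus measure: local 3x3 variance, a simple stand-in
    # for the wavelet-based energy matrix described in Rout.
    energy = np.empty(stack.shape, dtype=float)
    for k, img in enumerate(stack):
        padded = np.pad(img.astype(float), 1, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
        energy[k] = windows.var(axis=(-1, -2))
    # Raw index map: the focal plane with maximum energy at each pixel
    index_map = energy.argmax(axis=0)
    # Composite: take each pixel from the plane named by the index map
    rows, cols = np.indices(index_map.shape)
    composite = stack[index_map, rows, cols]
    return composite, index_map
```

In this sketch the index map plays the role of the recited depth values (identifying which focal plane images each pixel in focus), and the composite is the recited output image.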
Rout does not teach the following limitations as further recited, but Lai further teaches wherein the pixels in the neighborhood are used in a weighted manner (For a given sample position p in the depth map, the filtered output S'p is a weighted average of neighboring samples at position q (within a range Ω centered at p). [0024]) to determine the measure of variability (The similarity between corresponding sample values, i.e. the similarity between Ip and Iq, determined by the range filter g(||Ip-Iq||). [0024]) by giving higher weighting to pixels closer to the particular pixel than to farther away pixels (The domain filter defines a spatial neighborhood around the position p in which the samples Sq are used in the filtering process. The domain filter also determines its weight based on the distance to p. Typically, the weight is smaller at a position farther from p. To illustrate, an example of a domain filter with a window of size 5 x 5 and with filter weights that decrease exponentially with the 2D Euclidean distances between p and q is shown. For example, the weight may decrease as exp(-||p-q||/2). [0122]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rout to incorporate the teachings of Lai to utilize the pixels in the neighborhood in a weighted manner to determine the measure of variability by giving higher weighting to pixels closer to the particular pixel than to farther away pixels, in order to achieve better depth quality in a depth estimation method for applications that use a depth map.
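For illustration only, the weighted neighborhood measure attributed to the Rout/Lai combination (a measure of variability over a square window in which each neighbor's contribution decays exponentially with its Euclidean distance from the particular pixel, per Lai's example weight exp(-||p-q||/2)) can be sketched as follows. This is a hypothetical Python sketch; the function name and the use of squared deviation from the center pixel are illustrative assumptions.

```python
import numpy as np

def weighted_variability(img, row, col, radius=2):
    """Distance-weighted measure of variability around one pixel.

    Neighbors in a (2*radius+1) x (2*radius+1) square window centered
    on (row, col) are weighted by exp(-||p - q|| / 2), as in Lai's
    example domain filter, so pixels closer to the particular pixel
    contribute more than farther-away pixels.
    """
    h, w = img.shape
    center = float(img[row, col])
    acc = total_w = 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if 0 <= r < h and 0 <= c < w:
                weight = np.exp(-np.hypot(dr, dc) / 2.0)
                acc += weight * (float(img[r, c]) - center) ** 2
                total_w += weight
    return acc / total_w
```

With radius=2 the window is the 5 x 5 example cited from Lai at [0122]; a deviation adjacent to the center raises the measure more than the same deviation at a window corner.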
Claims 2-3, 6-7, 13 and 19 are unamended and are rejected based on the revised combination of Rout, in view of Lai, as applied to claim 1 above. The grounds of rejection established in the last Office Action are fully incorporated herein.
Regarding claim 21, Lai in the combination teaches the method of claim 1, wherein the neighborhood comprises a square region of neighboring pixels centered on the particular pixel (To illustrate, an example of a domain filter with a window of size 5 x 5. [0122]), and wherein determining the measure of variability comprises calculating a measure of variability for every pixel in the square region (The similarity between corresponding sample values, i.e. the similarity between Ip and Iq, determined by the range filter g(||Ip-Iq||). [0024]).
Claims 4-5 are unamended and are rejected based on the revised combination of Rout (PCT Patent Publication No.: WO 2014/083574 A2), hereinafter Rout, in view of Lai (Japan Patent Pub. No.: JP2013509022A), hereinafter Lai, as applied to claim 1 above, and further in view of Fang (Depth-Based Target Segmentation for Intelligent Vehicles: Fusion of Radar and Binocular Stereo, IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 3, NO. 3, SEPTEMBER 2002), hereinafter Fang. The ground of rejection established in the last Office Action is fully incorporated herein.
Claims 8-12 are unamended and are rejected based on the revised combination of Rout (PCT Patent Publication No.: WO 2014/083574 A2), hereinafter Rout, in view of Lai (Japan Patent Pub. No.: JP2013509022A), hereinafter Lai, as applied to claim 1 above, and further in view of Dekkers (High-resolution 3D imaging of fixed and cleared organoids, NATURE PROTOCOLS, VOL 14, JUNE 2019, 1756-1771), hereinafter Dekkers. The ground of rejection established in the last Office Action is fully incorporated herein.
Claims 22-26 are rejected under 35 U.S.C. 103 as being unpatentable over Rout (PCT Patent Publication No.: WO 2014/083574 A2), hereinafter Rout, in view of Lai (Japan Patent Pub. No.: JP2013509022A), hereinafter Lai, further in view of Yuan (US Patent No.: US 10,025,988 B2), hereinafter Yuan, further in view of Chai (EPO Patent Pub. No.: EP1355273A2), hereinafter Chai.
Regarding claim 22, Rout and Lai teach all of the elements of the claimed invention as stated in claim 1 except for the following limitations as further recited. However, Yuan teaches determining whether the depth value of the particular pixel is an outlier relative to depth values of neighboring pixels (A spatial outlier detection process 315A may be RMS (Root Mean Squared) based, or may involve producing a median filtered version and then differencing the present pixel from the filtered version. Column 5 line 5).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rout and Lai to incorporate the teachings of Yuan to determine whether the depth value of the particular pixel is an outlier relative to depth values of neighboring pixels in order to detect dead, stuck, or otherwise defective pixels of an image generator or its image or video output.
The combination of Rout, Lai, and Yuan does not teach the following limitations as further recited, but Chai further teaches amending the depth value of the particular pixel based on a determination that the value of the particular pixel is an outlier (The method calculates an average 3D depth information associated with the plurality of pixels overlapped by the window, and assigns the calculated average 3D depth information to the 3D depth information of the subject pixel. Abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chai to amend the depth value of the particular pixel based on a determination that the value of the particular pixel is an outlier in order to remove and smooth out irregularities in an image.
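For illustration only, the combination attributed to Yuan and Chai (flagging a depth value as an outlier by differencing against a median-filtered version, then amending the flagged value with an average of the neighboring depth values) can be sketched as follows. This is a hypothetical Python sketch; the function name, the 3x3 window, and the threshold value are illustrative assumptions, not the references' actual implementations.

```python
import numpy as np

def amend_depth_outliers(depth, threshold=5.0):
    """Detect and amend outlier depth values.

    A pixel is flagged as an outlier (cf. Yuan: differencing the
    present pixel from a median-filtered version, with a threshold)
    when its depth differs from the median of its 3x3 neighborhood by
    more than `threshold`. Flagged pixels are then amended (cf. Chai)
    by assigning the mean of the neighboring depth values.
    """
    padded = np.pad(depth.astype(float), 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    flat = windows.reshape(*depth.shape, 9)
    median = np.median(flat, axis=-1)
    # Mean of the 8 neighbors: window sum minus the center pixel
    neighbor_mean = (flat.sum(axis=-1) - depth) / 8.0
    outlier = np.abs(depth - median) > threshold
    return np.where(outlier, neighbor_mean, depth)
```

An isolated spike in an otherwise smooth depth map is detected against the local median and replaced by the neighborhood mean, while pixels consistent with their surroundings pass through unchanged.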
Regarding claim 23, Yuan in the combination teaches the method of claim 22, wherein determining whether the depth value of the particular pixel is an outlier includes determining that the depth value of the particular pixel varies from a mean depth value of the neighboring pixels (A spatial outlier detection process 315A may be RMS (Root Mean Squared) based, or may involve producing a median filtered version and then differencing the present pixel from the filtered version. Column 5 line 5) by a threshold amount (When the data exceeds the threshold, the pixel within the current block is labeled as an RPN pixel in the operation 335. Column 6 line 9).
Regarding claim 24, Lai in the combination teaches the method of claim 22, wherein amending the depth value of the particular pixel includes setting the depth value of the particular pixel to a particular value, the particular value at least partially based on depth value of the neighboring pixels (For a given sample position p in the depth map, the filtered output S'p is a weighted average of neighboring samples at position q (within a range Ω centered at p). [0024]).
Regarding claim 25, Chai in the combination teaches the method of claim 24, wherein the particular value comprises one of a mean of the depth values of the neighboring pixels (The method calculates an average 3D depth information associated with the plurality of pixels overlapped by the window, and assigns the calculated average 3D depth information to the 3D depth information of the subject pixel. Abstract) and a median of the depth values of the neighboring pixels (Note: the claim language is interpreted as disjunctive according to the specification [0054].).
Regarding claim 26, Lai in the combination teaches the method of claim 22, wherein amending the depth value of the particular pixel (For a given sample position p in the depth map, the filtered output S'p is a weighted average of neighboring samples at position q (within a range Ω centered at p). [0024]) includes modifying one or more parameters of the filter (The second term is an adaptive selection between two range filters gI(||Ip-Iq||) and gS(||Sp-Sq||). In general, the weight of the range filter gI decreases as the difference between Ip and Iq increases, and similarly, the weight of the range filter gS decreases as the difference between Sp and Sq increases. [0111]) and applying the modified filter to the particular image (As an alternative to the above methods, an adaptive selection / combination of the above two range filters for video frames and depth maps may be implemented. [0106]), and determining an updated depth value for the particular pixel based on the modified filter (Another implementation filters a portion of a depth picture based on values for a range of pixels in the portion. For a given pixel in the portion being filtered, the filter weights the value of a particular pixel within the range by a weight based on one or more of a position distance, a depth difference, and an image difference. Overview).
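For illustration only, the combined domain/range filtering cited from Lai (weighting each neighbor by both its spatial distance and its depth similarity, with the range-filter parameter adjustable) can be sketched as a bilateral-style filter. This is a hypothetical Python sketch; the function name and the Gaussian parameterization by sigma_s and sigma_r are illustrative assumptions. Re-running the filter with a modified sigma models "modifying one or more parameters of the filter" and determining an updated depth value.

```python
import numpy as np

def bilateral_depth(depth, row, col, sigma_s=1.0, sigma_r=5.0, radius=2):
    """Bilateral-style filtered depth value at one pixel.

    Each neighbor is weighted by a domain term (spatial distance to
    the center) and a range term (depth similarity to the center), in
    the spirit of Lai's combined domain/range filtering.
    """
    h, w = depth.shape
    center = float(depth[row, col])
    acc = total_w = 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if 0 <= r < h and 0 <= c < w:
                d = float(depth[r, c])
                spatial = np.exp(-(dr * dr + dc * dc) / (2 * sigma_s ** 2))
                similar = np.exp(-((d - center) ** 2) / (2 * sigma_r ** 2))
                weight = spatial * similar
                acc += weight * d
                total_w += weight
    return acc / total_w
```

With a small sigma_r the range term suppresses dissimilar neighbors and the center value is largely preserved; enlarging sigma_r admits those neighbors and yields an updated, smoothed depth value for the same pixel.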
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668