DETAILED ACTION
Claims 1-4, 7-8, and 14-27 are pending.
Claims 14-27 were elected without traverse in the reply filed on 02/03/2026.
Claims 1-4 and 7-8 are non-elected claims and are currently withdrawn from consideration.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 14-27 in the reply filed on 02/03/2026 is acknowledged.
Specification
The disclosure is objected to because of the following informalities:
Paragraph [0062] references “sensor 28”. The drawings of the instant application contain no item labeled “28”; sensors throughout the disclosure are referred to either as “sensor” or as “sensor 46”. Amendments to the corresponding drawings may be required to reflect the correction. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 18 recites the limitation "non-structured light sources" in line 4. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, claim 18 will be read as depending from claim 14, which recites “non-structured illumination sources”.
Claim 19 recites the limitation "non-structured light sources" in line 4. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, claim 19 will be read as depending from claims 18 and 14, which recite “non-structured illumination sources”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 14, 18-21, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below) in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below).
Regarding claim 14,
Elbaz teaches: An apparatus for intraoral scanning (Abstract “Described herein are intraoral scanning … apparatuses for generating a three-dimensional model of a subject's intraoral region (e.g., teeth)”), the apparatus comprising
an elongate wand comprising a probe at a distal end of the elongate wand (FIG. 1A, Intraoral scanner wand 103; FIG. 26A, 2801; ¶ [0009] “…a scanning wand for the intraoral scanner that can be more easily positioned and moved around a subject's teeth.”);
one or more cameras disposed within the probe (¶ [0019] “…any number of sensors may be included on the intraoral scanner, e.g., the wand of the intraoral scanner…Sensors may be referred to and may include detectors, cameras, and the like.”);
one or more non-structured illumination sources disposed within the elongate wand, and arranged such that images of an intraoral surface are captured using the one or more cameras under non-uniform illumination from the one or more non-structured illumination sources (¶ [0077] “…one or more processors configured to: capture 3D surface model data… take a plurality of images into the teeth using light…”; ¶ [0024] “…a hand-held wand having at least one sensor and a plurality of light sources, wherein the light sources are configured to emit light at a first spectral range and a second spectral range…”; ¶ [0007] “…light source or light sources that can illuminate in two or more spectral ranges: a surface-feature illuminating spectral range (e.g., visible light) and a penetrative spectral range (e.g. IR range, and particularly ‘near-IR,’ including but not limited to 850 nm).”; ¶ [0077] “…the second spectral range is within near-infrared (near-IR) range of wavelengths…”; ¶ [0074] “…taking a plurality of images into the teeth using a near-infrared (near-IR) wavelength…”; Elbaz does not describe structured light projection or pattern generation. The disclosed light sources are ordinary spectral illumination sources (e.g., visible, NIR), which qualify as non-structured illumination sources, i.e., not pattern-projection structured light.);
and a computer processor configured to analyze images captured by the one or more cameras under the non-uniform illumination from the one or more non-structured illumination sources (¶ [0025] “one or more processors configured to: determine surface information by using light in the first spectral range sensed by the hand-held wand, using a first coordinate system; generate a three-dimensional (3D) surface model of at least a portion of a subject's tooth using the surface information”).
Elbaz is not relied on for the below claim language:
In a related art, Bernardini teaches: a computer processor configured to analyze images captured by the one or more cameras under the non-uniform illumination from the one or more non-structured illumination sources (¶ [0044] “processor 180 is coupled to the camera 100 and receives captured images therefrom. The data processor 180 is programmed to implement the following method.”; Abstract “A method of using the calibration target operates, for each of the light sources to be calibrated, to capture an image of the target; process the captured image to derive light source calibration data and to store the calibration data.”; The method taught by Bernardini is equivalent to analyzing images captured by one or more cameras under non-uniform illumination from non-structured illumination sources because the method includes a processor capturing an image under such illumination and processing it based on a number of calibration parameters. Non-uniform illumination as taught by Bernardini is explained below in this claim rejection, and its use applies to the light sources described hereinabove.), wherein the computer processor is configured to compensate for a non-uniformity of the non-uniform illumination using calibration data generated based on a mathematical model of the non-uniform illumination from the one or more non-structured illumination sources of the apparatus, the mathematical model including a location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras (A real light source deviating from isotropic emission, with illumination intensity varying spatially, as taught by Bernardini and seen below, constitutes non-uniform illumination.
Bernardini teaches compensating for a non-uniformity of the non-uniform illumination using calibration data from a model by comparing observations of light emitted from the non-structured illumination source to the light that would be emitted from an ideal source of illumination in order to compute a correction distribution (i.e., calibration data) (¶ [0016] “This invention provides a system and a method for the calibration of the position and directional distribution of light sources using only images of a predetermined target object.”; ¶ [0016] “The technique for determining light source position builds on the simple observation that a point source of light, a point on an object and the corresponding point in its cast shadow all lie on the same line. The position of a light source is found by determining the intersection of a plurality of such lines. Given the position of the light source, a description of its directional distribution can be obtained by comparing observations of light emitted from the source to the light that would be emitted from an ideal source.”; ¶ [0042] “…the correction for non-ideal light source distribution is computed by a simple interpolation. For an application to 3D objects, the point observed by the camera is identified in 3D, and the direction from the light source to the point is computed, as shown in FIG. 9B. The correction for the non-ideal distribution then is found by interpolating the corrections for the neighboring directions.”).
Bernardini further teaches that the model characterizing the non-uniform illumination includes the location of the surface in terms of the camera's coordinate system, calculated using the detected image locations and the known 3D points (¶ [0046] “In step 736 the location of the surface 300 in terms of the coordinate system of the calibrated camera 100 is calculated using the image locations detected, and the known distances between the 3D points on the surface 300… intensities of the pixels in the image acquired in step 720 are sampled at some preset number of points on the target (as shown in FIG. 8), and are compared to the intensities that would be observed in the presence of an ideal isotropic light source… a directional distribution for each light source 210-250 is computed as an interpolation function between the corrected light values found at the sample points… The location of the target surface 300 is then known and can be used in the calculations for the subsequent sources.”). Thus, Bernardini teaches the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras).
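For illustration only, the point/shadow-line geometry quoted from Bernardini ¶ [0016] (a light source, an object point, and its cast-shadow point lying on one line, with the source position found at the intersection of several such lines) can be sketched numerically as a least-squares intersection of 3D lines. The function name and synthetic coordinates below are hypothetical and are not drawn from either reference:

```python
import numpy as np

def light_position_from_lines(points, shadows):
    """Estimate a point-light position as the least-squares intersection
    of the lines that each pass through an object point and its cast
    shadow (cf. Bernardini, paragraph [0016])."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in zip(points, shadows):
        d = (s - p) / np.linalg.norm(s - p)   # unit direction of the line
        M = np.eye(3) - np.outer(d, d)        # projector orthogonal to d
        A += M                                # accumulate normal equations
        b += M @ p
    return np.linalg.solve(A, b)              # nearest point to all lines

# synthetic check: shadow points lie on rays from a known source
true_light = np.array([0.0, 0.0, 10.0])
pts = np.array([[1.0, 0.0, 5.0], [0.0, 2.0, 4.0], [-1.5, 1.0, 6.0]])
shadows = pts + 0.7 * (pts - true_light)      # further along each ray
est = light_position_from_lines(pts, shadows) # recovers ~[0, 0, 10]
```

Each line contributes the constraint that the unknown source lies on it; summing the orthogonal projectors yields a small linear system whose solution is the point minimizing the squared distance to all lines.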
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the processors’ calibration and correction teachings of Bernardini to compensate for a non-uniformity of the non-uniform illumination sources of the intraoral scanning wand taught by Elbaz. Both references teach a processor configured to analyze images captured by one or more cameras under non-uniform illumination from one or more non-structured illumination sources. Applying Bernardini’s processor-performed calibration and correction teachings to the processor and known invention of a wand with non-structured illumination sources and one or more cameras to capture the images, taught by Elbaz, would yield the predictable improvements of imaging accuracy and 3D reconstruction reliability, and would compensate for the light variation and deterioration inherent in handheld probes used in compact spaces. Combining the references would also address the prior-art problems Bernardini aims to solve, including lowering the costs, complexity, and user-introduced errors commonly associated with frameworks that do not implement optimum light source calibration (Bernardini ¶ [0013]). Furthermore, Elbaz explains, “it would be beneficial to provide methods and apparatuses, including devices and systems, such as intraoral scanning systems, that may be used to model a subject's tooth or teeth and include both external (surface) and internal (within the enamel and dentin) structures and composition using non-ionizing radiation.” (Elbaz ¶ [0006]). Both inventions lie in the same field of endeavor of image processing and analysis based on images captured from cameras using non-uniform illumination, with specific applications in the medical field.
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
Regarding claim 18,
Elbaz and Bernardini teach the apparatus according to claim 14.
Bernardini further teaches: wherein the computer processor is configured to compensate for the non-uniformity of the non-uniform illumination using calibration data generated based on the mathematical model of the non-uniform illumination from the one or more non-structured light sources of the apparatus, the mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras (The limitation up to this point mirrors the corresponding limitation of claim 14. For the sake of brevity, please refer back to the 35 U.S.C. 103 rejection of claim 14 (seen above) and Bernardini Abstract, ¶ [0044], ¶ [0016], ¶ [0042], and ¶ [0046], as detailed above.).
via images of a reflective calibration target (A camera is used to capture images to be processed (¶ [0044]); ¶ [0016] “This invention provides a system and a method for the calibration…using only images”; ¶ [0034] “The light source calibration target 20 includes a planar surface 300 that is preferably white and diffusely reflecting (Lambertian or substantially Lambertian) …”; According to a person of ordinary skill in the art, “diffusely reflecting surfaces” constitute a type of “reflective” surface. Thus, Bernardini teaches a mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of a reflective calibration target.).
Regarding claim 19,
Elbaz and Bernardini teach the apparatus according to claim 18.
Bernardini further teaches: wherein the computer processor is configured to compensate for the non-uniformity of the non-uniform illumination using calibration data generated based on the mathematical model of the non-uniform illumination from the one or more non-structured light sources of the apparatus, the mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of the reflective calibration target (The limitation up to this point mirrors the limitation found in claim 18, lines 1-7. For the sake of brevity, please refer back to the 35 U.S.C. 103 rejections of claims 14 and 18 (seen above) and Bernardini Abstract, ¶ [0044], ¶ [0016], ¶ [0042], ¶ [0046], and ¶ [0034], as detailed above.).
Bernardini fails to explicitly disclose the mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of the reflective calibration target that are acquired prior to the apparatus being packaged for commercial sale.
While Bernardini does not explicitly disclose a mathematical model using images of the reflective calibration target that are acquired prior to the apparatus being packaged for commercial sale, Bernardini does teach a calibration system using only images of a predetermined target object and assumes the camera has been previously calibrated: “The invention provides a system and a method for the calibration of the position and directional distribution of light sources using only images of a predetermined target object. The invention assumes that the camera has previously been geometrically calibrated.” (¶ [0016]). One of ordinary skill in the art would reasonably understand a previously calibrated camera to encompass calibration performed prior to the apparatus being packaged for commercial sale. Additionally, using only images of a predetermined target object for calibration does not constrain when the images of the reflective calibration target are acquired.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz’s intraoral scanning apparatus and Bernardini’s calibration framework to utilize images of the reflective calibration target that are acquired prior to the apparatus being packaged for commercial sale, since Bernardini already teaches including images of the reflective calibration target in the mathematical model, does not put a time constraint on when the reflective calibration target images are acquired, and teaches calibration being done to elements (e.g., the camera) of the system prior to the apparatus being packaged for commercial sale. Bernardini explains that a motivation behind the invention is to lower the amount of time required for 3D scanning (¶ [0012]), and that “while the method and apparatus described herein are provided with a certain degree of specificity, the present invention could be implemented with either greater or lesser specificity, depending on the needs of the user.” (¶ [0053]). The mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of the reflective calibration target that are acquired prior to the apparatus being packaged for commercial sale would have represented a predictable variation within the disclosed detection framework in order to decrease the time associated with acquiring images of the reflective calibration target.
Regarding claim 20,
Elbaz and Bernardini teach the apparatus according to claim 14.
Bernardini further teaches: wherein the mathematical model of the non-uniform illumination from the one or more non-structured illumination sources of the apparatus includes an estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources (As previously detailed in claim 14, Bernardini teaches a mathematical model of the non-uniform illumination from the one or more non-structured illumination sources of the apparatus. Bernardini further teaches an estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources used in the model by measuring reflected light from a diffuse calibration surface and comparing the measured intensity to the expected reflection from an ideal isotropic source (¶ [0042] “The diffuse nature of the target surface 300 facilitates the estimate of the variation of the light source distribution from the ideal, isotropic distribution. As shown in FIG. 8, the light magnitude of the light for different directions from a light source (e.g., 210) is sampled by observing the light reflected from the target surface 300 at points distributed across the light source calibration target 20. The sample of light reflected at the selected points may be taken as the pixel value at that point, or to reduce the effect of noise in the image, an average of values around the location may be used. For an ideal isotropic light source, the reflected light is given by the light source intensity times the cosine of the angle between a ray from the point to the light source and the surface normal, divided by the distance to the source squared. By computing how the observed reflected light varies from this ideal reflection, a ratio is formed characterizing the light in each direction, as shown in a 2D example in FIG. 9. For an application when all objects are to be scanned on the plane (the case shown in FIG. 9A), the correction for non-ideal light source distribution is computed by a simple interpolation. For an application to 3D objects, the point observed by the camera is identified in 3D, and the direction from the light source to the point is computed, as shown in FIG. 9B. The correction for the non-ideal distribution then is found by interpolating the corrections for the neighboring directions.”)).
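For illustration only, the ideal-isotropic reflection relation quoted from Bernardini ¶ [0042] (reflected light equals source intensity times the cosine of the angle to the light ray, divided by the squared distance) and the per-direction correction ratio can be sketched as follows; the function names and synthetic values are hypothetical:

```python
import numpy as np

def ideal_reflection(intensity, light_pos, point, normal):
    """Reflection from an ideal isotropic point source:
    intensity * cos(angle between normal and ray to light) / distance^2
    (cf. Bernardini, paragraph [0042])."""
    ray = light_pos - point
    dist = np.linalg.norm(ray)
    cos_theta = max(np.dot(ray / dist, normal), 0.0)
    return intensity * cos_theta / dist**2

def direction_corrections(intensity, light_pos, samples, normal, observed):
    """Ratio of observed to ideal reflection at each sample point on the
    diffuse target; characterizes the non-ideal distribution per direction."""
    ideal = np.array([ideal_reflection(intensity, light_pos, p, normal)
                      for p in samples])
    return observed / ideal

light = np.array([0.0, 0.0, 2.0])
n = np.array([0.0, 0.0, 1.0])                      # planar target normal
samples = np.array([[0.0, 0, 0], [0.5, 0, 0], [1.0, 0, 0]])
# pretend the real source is 10% dimmer off-axis than an ideal one
observed = np.array([ideal_reflection(1.0, light, p, n) for p in samples])
observed[1:] *= 0.9
corr = direction_corrections(1.0, light, samples, n, observed)
# correction ratios: 1.0 on-axis, 0.9 for the off-axis directions
```

Interpolating these per-direction ratios, as the quoted passage describes, would then supply the correction for arbitrary viewing directions.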
Regarding claim 21,
Elbaz and Bernardini teach the apparatus according to claim 20.
Bernardini further teaches: wherein the estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources is estimated based on calibration images captured using the one or more cameras of a diffusive calibration target illuminated with the one or more non-structured illumination sources (Refer back to the excerpt from Bernardini in the rejection of claim 20 above and the use of a diffuse calibration surface (i.e., a “diffuse calibration target”) for estimating illumination intensity-per-angle. Furthermore, refer to FIG. 3, which shows a Lambertian (diffuse) calibration target used for light source calibration (FIG. 3; ¶ [0021] “FIG. 3 is a front view of a white, Lambertian (diffuse) calibration target used for light source calibration”). A Lambertian calibration target is a type of diffusive calibration target.).
Regarding claim 25,
Elbaz and Bernardini teach the apparatus according to claim 14.
Elbaz further teaches: wherein the one or more non-structured illumination sources comprise one or more Near Infra-Red (NIR) illumination sources (¶ [0024] “…a hand-held wand having at least one sensor and a plurality of light sources, wherein the light sources are configured to emit light at a first spectral range and a second spectral range…”; ¶ [0007] “…light source or light sources that can illuminate in two or more spectral ranges: … a penetrative spectral range (e.g. IR range, and particularly ‘near-IR,’ including but not limited to 850 nm).”; ¶ [0077] “…the second spectral range is within near-infrared (near-IR) range of wavelengths…”; ¶ [0074] “…taking a plurality of images into the teeth using a near-infrared (near-IR) wavelength…”; Elbaz does not describe structured light projection or pattern generation. The disclosed light sources are ordinary spectral illumination sources (e.g., NIR), which qualify as non-structured Near Infra-Red (NIR) illumination sources, i.e., not pattern-projection structured light.).
Claim(s) 15, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below), in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below), and in further view of US 20180089855 A1 (Rodrigues et al.; referred to as “Rodrigues”, below).
Regarding claim 15,
Elbaz and Bernardini teach: the apparatus according to claim 14, including a mathematical model of the non-uniform illumination from the one or more non-structured illumination sources.
Elbaz further teaches: the wand is used to scan an intraoral surface of a “subject” (Abstract “…intraoral scanning methods and apparatuses for generating a three-dimensional model of a subject's intraoral region (e.g., teeth)…”) and “the resulting 3D model including surface and internal structures may be used in a variety of ways to benefit subject (e.g., patient) health care” ( ¶ [0185]). Thus, Elbaz teaches the use of the elongate wand to scan an intraoral surface of a patient.
Elbaz and Bernardini are not relied on for the below claim language:
Rodrigues teaches: wherein the computer processor is configured to update the mathematical model of the non-uniform illumination (Rodrigues teaches a processor configured to estimate and adjust parameters of a mathematical model associated with non-uniform illumination, based on calibration images, see ¶ [0113] “the memory 1008 may comprise an image processing module 1010 that may be accessed and implemented by processor 1002….the image processing module 1010 may estimate the camera response function and the vignetting in case of non-uniform illumination using one or more calibration images”. Rodrigues further teaches the processor is configured to receive input images and output calibrated images, see ¶ [0115] “the processor 1002 may be operatively coupled to an input interface 1004 configured to obtain one or more images and output interface 1006 configured to output and/or display the calibrated images”, indicating the estimation and calibration are performed in connection with images obtained by the system. Rodrigues further teaches instructions and real-time monitoring may be done via processor executed software, see ¶ [0114] “Implementing instructions, real-time monitoring, and other functions by loading executable software into a computer and/or processor can be converted to a hardware implementation by well-known design rules and/or transform a general-purpose processor to a processor programmed for a specific application.” Thus, Rodrigues teaches recalculating parameters of the illumination model based on captured calibration images during operation of the apparatus. Because the model parameters (e.g. vignetting and response function) are re-estimated based on subsequently obtained images, via a processor, Rodrigues teaches the processor is updating the mathematical model of the non-uniform illumination after a given image acquisition and before a subsequent image acquisition.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of updating the mathematical model of the non-uniform illumination after a given scan and before a subsequent scan, by implementing real-time mathematical model input parameters as taught by Rodrigues, to the intraoral scanning wand that uses non-structured illumination sources, taught by Elbaz, as modified for calibration and correction improvements by Bernardini’s teachings. Doing so would provide an apparatus with an elongate wand for intraoral scanning, using non-structured illumination sources, that yields the predictable result of continuous updates to the mathematical model between scans, thus capturing more accurate data for users to rely on. Elbaz, Bernardini, and Rodrigues all lie in the same field of endeavor of image processing and analysis based on images captured from cameras using non-uniform illumination, with specific applications in the medical field. The motivation to combine the references is to improve the accuracy of an intraoral apparatus by accounting for updates after a scan and before the next scan and, as explained in paragraph [0012] of Bernardini, to decrease the time requirements and the devices beyond cameras and light sources needed to determine light source directional distribution.
Regarding claim 22,
Elbaz and Bernardini teach the apparatus according to claim 14, including the non-structured illumination sources and the computer processor configured as recited above.
Elbaz and Bernardini are not relied on for the below claim language:
In a related art, Rodrigues teaches: wherein the computer processor is configured (Rodrigues teaches a processor is configured to perform the image processing system using various types of data (¶ [0016]; ¶¶ [0111]-[0113]; ¶ [0115])) to further compensate for the non-uniformity of the non-uniform illumination of the one or more non-structured illumination sources using camera-vignette calibration data indicative of a measure of relative illumination for each of the one or more cameras (Rodrigues teaches images are captured from a camera, used for calibration, and calibrated (¶ [0115] “the processor 1002 may be operatively coupled to an input interface 1004 configured to obtain one or more images and output interface 1006 configured to output and/or display the calibrated images”). Rodrigues further teaches compensating for the non-uniformity of the non-uniform illumination of the one or more non-structured illumination sources using camera-vignette calibration data indicative of a measure of relative illumination for each of the one or more cameras by using the captured images from the camera to estimate the camera response and vignetting under non-uniform illumination setups, including calibration images in the presence of non-uniform illumination (Abstract “Methods and systems for … camera response function and vignetting, and in terms of color, suitable for non-uniform illumination set-ups. It estimates the camera response function and the camera color mapping from a single image of a generic scene with two albedos. With a second same-pose image with a different intensity of the near-light the vignetting is also estimated... For the modelling the vignetting there are three steps: computing the albedo-normalized irradiance, finding points of equal vignetting, when needed, and estimation.”; ¶ [0113] “…the memory 1008 may be used to house the instructions for carrying out various embodiments described herein…the image processing module 1010 may be stored and accessed within memory embedded in processor 1002 (e.g., cache memory). Specifically, the image processing module 1010 may estimate the camera response function and the vignetting in case of non-uniform illumination using one or more calibration images.”) and non-structured illumination sources such as near-lighting (¶ [0038] “…estimate the vignetting under near-lighting…”; ¶ [0106] “…our approach benefits from the effects of near-lighting and vignetting”). Vignetting is well known to one of ordinary skill in the art to embody spatial variation in brightness across the camera field caused by illumination and optical features. Thus, estimating vignetting results in calibration data indicative of the relative illumination received by the camera across different image regions, which may be used to correct the captured images.).
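For illustration only, the notion of camera-vignette calibration data as a measure of relative illumination across the image field can be sketched with a generic radial-falloff fit to a flat-field image. This is a simplified, hypothetical sketch (Rodrigues' actual estimation also models the camera response function and near-light effects); the function name and synthetic values are not drawn from any reference:

```python
import numpy as np

def fit_radial_vignetting(image, cx, cy):
    """Fit an even-polynomial radial falloff V(r) = 1 + a2*r^2 + a4*r^4
    to a flat-field image; 1/V(r) then serves as a per-pixel
    relative-illumination correction (generic sketch only)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / max(cx, cy) ** 2  # normalized r^2
    ratio = image.ravel() / image[int(cy), int(cx)]            # falloff vs. center
    A = np.stack([r2.ravel(), r2.ravel() ** 2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, ratio - 1.0, rcond=None)
    return coeffs                                              # (a2, a4)

# synthetic flat-field with a known falloff a2 = -0.3, a4 = 0.05
h, w = 65, 65
ys, xs = np.mgrid[0:h, 0:w]
r2 = ((xs - 32) ** 2 + (ys - 32) ** 2) / 32.0 ** 2
img = 100.0 * (1 - 0.3 * r2 + 0.05 * r2 ** 2)
coeffs = fit_radial_vignetting(img, 32, 32)  # recovers ~(-0.3, 0.05)
```

Dividing a captured image by the fitted V(r) would flatten the brightness field, which is the role such relative-illumination calibration data plays in correction.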
With the exception of “near-lighting” sources, Rodrigues does not explicitly disclose other types of non-structured illumination sources (e.g. NIR, broad spectrum, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the intraoral apparatus, including its NIR sources and mathematical model, taught by Elbaz and Bernardini, to incorporate the teachings of a processor configured to compensate for the non-uniformity of the non-uniform illumination of the one or more non-structured illumination sources using camera-vignette calibration data indicative of a measure of relative illumination for each of the one or more cameras, taught by Rodrigues, because the apparatus taught by Elbaz and Bernardini already has a configurable processor, uses non-structured illumination sources, and is configured to compensate for the non-uniformity of the non-uniform illumination using calibration data. Doing so would only require the additional framework that uses camera-vignette calibration data and calibration estimation taught by Rodrigues and known before the effective filing date of the claimed invention. Elbaz, Bernardini, and Rodrigues all lie in the same field of endeavor of image processing and analysis based on images captured from cameras using non-uniform illumination, with specific applications in the medical field. The motivation to combine the references is to improve the methods of scanning an intraoral cavity of a patient, identified by Elbaz ¶ [0106], by increasing the accuracy of calibration techniques through accounting for data indicative of a measure of relative illumination for each of the one or more cameras.
Regarding claim 24,
Elbaz, Bernardini, and Rodrigues teach the apparatus according to claim 22.
Bernardini further teaches: wherein the mathematical model of the non-uniform illumination from the one or more non-structured illumination sources of the apparatus includes an estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources (This limitation mirrors the limitation found in claim 20 and is rejected based on the prior art cited in the rejections of claims 20 and 22.).
Claim(s) 23 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below), in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below), in further view of US 20180089855 A1 (Rodrigues et al.; referred to as “Rodrigues”, below), and in further view of US 20210243369 A1 (Mutto & Marin; referred to as “Mutto”, below).
Regarding claim 23,
Elbaz, Bernardini, and Rodrigues teach the apparatus according to claim 22.
Rodrigues further teaches: wherein the camera-vignette calibration data is generated by: capturing, using the one or more cameras, calibration images (Rodrigues teaches images are captured from a camera, used for calibration, and calibrated (¶ [0115] “the processor 1002 may be operatively coupled to an input interface 1004 configured to obtain one or more images and output interface 1006 configured to output and/or display the calibrated images”))
and fitting a relative illumination model to each of the one or more cameras (¶ [0113] “Specifically, the image processing module 1010 may estimate the camera response function and the vignetting in case of non-uniform illumination using one or more calibration images.”).
Elbaz and Rodrigues are not relied on for the below claim language:
In a related art, Bernardini further teaches: wherein the
capturing, using the one or more cameras, calibration images of a 2D calibration target (Bernardini teaches a camera is used to capture calibration data from “target surfaces” (Abstract “A light source calibration target has a surface in view of a camera of an image capture system, The target includes a substrate having a substantially Lambertian surface… A method of using the calibration target operates, for each of the light sources to be calibrated, to capture an image of the target; process the captured image to derive light source calibration data and to store the calibration data.”; ¶ [0015] “the target surface also has a uniform, diffuse, light dispersive Lambertian coating”). The “target surface” for calibration taught by Bernardini constitutes a planar 2D calibration target because a “surface” constitutes a planar 2D object and the “target surface” can be used to reflect light at multiple locations to determine illumination features; thus it is a calibration target, and specifically a “2D calibration target”.)
having a plurality of distinct calibration features (Abstract “A light source calibration target…The target includes…a visually distinct polygonal shape having corners formed on said surface so as to be visually distinct from the surface, and a plurality of objects each having an upstanding tip mounted on the surface”),
the capturing of the calibration images performed while the 2D calibration target is lit with uniform illumination (¶ [0015] “the target surface also has a uniform, diffuse, light dispersive Lambertian coating. By observing the distribution of light reflected from the target surface from each light source the directional distribution of the light sources can be measured.”),
and fitting a relative illumination model to each of the one or more cameras (¶ [0015] “the geometries of the camera and of the target are known, the acquired images are used to geometrically characterize the cast shadows of the target, and thus deduce the locations of the light sources… By observing the distribution of light reflected from the target surface from each light source the directional distribution of the light sources can be measured.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the intraoral apparatus taught by Elbaz, Bernardini, and Rodrigues by applying Rodrigues’ camera-vignetting estimation to the 2D calibration target images taught by Bernardini in order to generate camera-vignette calibration data. The motivation to combine inventions would be to improve the accuracy of the illumination model and subsequent image correction by accounting for camera-dependent illumination variation data, derived from camera-vignetting techniques, in the illumination model. The inventions taught by Elbaz, Bernardini, and Rodrigues all lie in the same field of endeavor of image processing and analysis based on images captured from cameras using non-uniform illumination, with specific applications in the medical field.
Elbaz, Bernardini, and Rodrigues are not relied on for the below claim language:
In a related art, Mutto teaches: calibration target is back-lit with uniform illumination (¶ [0015] “The first calibration target may include a backlit calibration target including a plurality of light emitting diodes configured to emit light through a calibration pattern.”; ¶ [0117] “As shown in FIG. 6, in one embodiment, the back illuminated calibration target includes one or more strips of infrared light emitting diodes 602 and color light emitting diodes (e.g., white light emitting diodes, or red, green, and blue light emitting diodes) 604 mounted in a housing and configured to emit light toward a diffuser 606. A calibration target 200 may then be applied to an opposite side of the diffuser 606, such that diffused color and infrared light is emitted through the calibration pattern 200, thereby improving the ability of the cameras 100 to detect the calibration pattern. However, embodiments of the present invention are not limited thereto and may be used with other arrangements for generating a backlit calibration target (e.g., different light sources capable of generating visible and infrared light).”; The backlighting of a calibration target with a pattern of LEDs and the use of a diffuser is known by one of ordinary skill in the art to create a uniform illumination by spreading the pattern evenly across the targeted calibration area.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the intraoral apparatus and camera-vignetting framework taught by Elbaz, Bernardini, and Rodrigues to incorporate the backlighting of calibration targets taught by Mutto because Elbaz, Bernardini, and Rodrigues already teach capturing of the calibration images while the 2D calibration target is lit with uniform illumination, and providing a back-lit uniform illumination would predictably result in a higher degree of contrast for calibration purposes, which would also increase the accuracy of the subsequent camera-vignette data the illumination model relies on, and improve the intraoral apparatus’ calibration framework as a whole. The inventions all lie in the same field of endeavor of optical imaging systems that use cameras and calibration frameworks to capture and process images and determine illumination.
Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below), in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below), in further view of “Flat Refractive Geometry” (Treibitz et al.; referred to as “Treibitz”, below; copy provided by examiner), and in further view of “Mathematical Modelling of Optical Glazing Performance” (Peter A. van Nijnatten; copy provided by examiner).
Regarding claim 16,
Elbaz and Bernardini teach the apparatus according to claim 14.
Elbaz further teaches: wherein the probe has a transparent window through which the one or more non-structured illumination sources are configured to emit light onto an intraoral surface (FIG. 26A, 2801, shows wand with sleeve; FIG. 29A and FIG. 29B, ¶ [0266] “the wand of an intra-oral scanner is shown with a sleeve 3101 disposed around the end of the wand 3105… the sleeve 3105 slips over the end of the wand so that the light sources and cameras (sensors) already on the wand are able to visualize through the sleeve.”; ¶ [0269] “The sleeve may be assembled… including the overall sleeve, windows for illumination and image capture,…and one or more LED holding regions (e.g., injection of an IR and visible-light transparent material forming windows through the sleeve…”. The window comprises a light-transmissive optical component because the window allows transmission of light into and out of the probe. Under the broadest reasonable interpretation, a light-transmissive window, as taught by Elbaz, constitutes a transparent window.)
Elbaz and Bernardini are not relied on for the below claim language:
In a related art, Treibitz teaches: the mathematical model includes (i) a distance of a calibration target from the transparent window of the probe (Abstract “Our physics based model is parameterized by the distance of the lens from the medium interface…The physical parameters are calibrated…”; p. 1, Column right-side, rows 12-14 “In field operations, extensive studies deal with…scene recovery…which commonly use a flat port (window).”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of an elongated wand with a transparent window and processor with a mathematical model taught by Elbaz and Bernardini to incorporate the teachings of including a distance of a calibration target from the transparent window of the probe, as taught by Treibitz, to account for the impact of distance from the probe to the object or area being modeled. Doing so would provide a predictable increase in accuracy of the collected data by accounting for the distance of the calibration target in the mathematical model calibration. The inventions lie in the same field of endeavor of optical systems used for modeling and are pertinent for solving problems caused by relationships between light interacting with transparent windows. The motivation to combine references is to improve accuracy of intraoral models, and address, as Elbaz states, “a need for improved methods and systems for scanning an intraoral cavity of a patient, and/or for automating the identification and analysis of dental caries.” (Elbaz ¶ [0006]).
Elbaz, Bernardini, and Treibitz are not relied on for the below claim language:
In a related art, van Nijnatten teaches: the mathematical model includes (ii) Fresnel reflections from the transparent window (van Nijnatten teaches modeling optical behavior of glass glazing using Fresnel equations, see (Abstract “Mathematical modelling can be a powerful tool in the design and optimalisation of glazing. By calculation, the specifications of a glazing design and the optimal design parameters can be predicted… properties which are difficult to measure, like for instance solar and visible light properties for oblique or diffuse irradiation, can be determined accurately by calculation”; p. 753, Section “1. Introduction”, paragraph 3, “An alternative way of determining angular properties, is by calculation. This is possible using a computer model based upon Fresnel's equations and the optical constants of all optical media involved (glass and coatings). The optical constants (spectral complex refractive index) can be derived from the transmittance and reflectance spectra…”). Van Nijnatten further teaches Fresnel equations and defining amplitude reflectance and transmittance coefficients (see p. 756, Sub-section “4.1 Basic equations”) for light interacting with glass. These equations mathematically describe reflection and transmission of light at the glass. The Fresnel-based model taught by van Nijnatten includes Fresnel reflections from the transparent window because glass panes for glazing constitute transparent glass windows.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of an elongated wand with a transparent window and processor with a modified mathematical model taught by Elbaz, Bernardini, and Treibitz to incorporate the teachings of van Nijnatten and include Fresnel reflections from the transparent window in the mathematical model. Doing so would predictably enhance the mathematical model’s accuracy by accounting for the transparent window’s impact on light reflection in an intraoral space. Elbaz, Bernardini, Treibitz, and van Nijnatten all lie in the same field of endeavor of optical systems used for modeling and are pertinent for solving problems caused by relationships between light interacting with transparent windows. The motivation to combine references is to improve accuracy of intraoral models, and address, as Elbaz states, “a need for improved methods and systems for scanning an intraoral cavity of a patient, and/or for automating the identification and analysis of dental caries.” (Elbaz ¶ [0006]). Accordingly, Bernardini states, “while the method and apparatus described herein are provided with a certain degree of specificity, the present invention could be implemented with either greater or lesser specificity, depending on the needs of the user.”; thus, incorporating the model parameters taught by Treibitz and van Nijnatten, with the predictable increase in data accuracy (i.e., “greater specificity”), would be consistent with Bernardini’s disclosure.
Claim(s) 17 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below) in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below), and in further view of US 20200404243 A1 (Saphier et al.; referred to as “Saphier”, below).
Regarding claim 17,
Elbaz and Bernardini teach the apparatus according to claim 14.
Elbaz and Bernardini are not relied on for the below claim language:
In a related art, Saphier teaches: wherein the one or more non-structured illumination sources comprise one or more broad spectrum illumination sources (Saphier teaches generating a 3D image of an intraoral surface using a handheld wand and structured light projectors (Abstract; ¶ [0040]; FIG. 1) and in one embodiment, Saphier teaches determining features in an intraoral cavity using “unstructured light” (e.g. broad spectrum light) (¶ [0041] “…whether a feature (e.g., spot) has been projected on moving or stable tissue within the intraoral cavity may be determined on image frames of unstructured light (e.g., which may be broad spectrum light).”; ¶ [0151] “the unstructured light comprises broad spectrum light”); Examiner interprets “non-structured illumination source” to be equivalent to “the unstructured light” in the context of the instant application and as taught by Saphier).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of an intraoral apparatus with an elongated wand and non-structured illumination sources taught by Elbaz and Bernardini to incorporate the teachings of one or more non-structured illumination sources comprising one or more broad spectrum illumination sources, as taught by Saphier. Doing so would make the apparatus more robust by providing an additional illumination source that predictably provides different results than other illumination sources (e.g. determining features like spots (Saphier ¶ [0041])). Further, since Elbaz teaches other illumination sources may be used for the mode being used (¶ [0146]-[0147] “Although separate illumination sources are shown in FIG. 1B, in some variations a selectable light source may be used. The light source may be any appropriate light source…any appropriate light source may be used, in particular, light sources matched to the mode being detected.”) and because Elbaz teaches an intraoral apparatus comprising non-structured illumination sources for intraoral scanning (refer back to claim 14), it is predictable to use one or more broad spectrum illumination sources, taught by Saphier, in Elbaz’s apparatus for intraoral scanning. All three inventions lie in the same field of endeavor of image processing and analysis based on images captured from cameras using non-uniform illumination, with specific applications in the medical field. The motivation to combine includes improving the capture of an intraoral scan, when using a digital intraoral scanner (Saphier ¶ [0004]), by making the apparatus more robust to account for different data picked up from varying light sources (Saphier ¶ [0041]).
Claim(s) 26 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180028063 A1 (Elbaz et al.; referred to as “Elbaz”, below), in view of US 20040217260 A1 (Bernardini et al.; referred to as “Bernardini”, below), in further view of US 20180089855 A1 (Rodrigues et al.; referred to as “Rodrigues”, below), in further view of US 20210243369 A1 (Mutto & Marin; referred to as “Mutto”, below), and in further view of “Flat Refractive Geometry” (Treibitz et al.; referred to as “Treibitz”; copy provided by examiner).
Regarding Claim 26,
Elbaz, Bernardini, Rodrigues, and Mutto teach: An apparatus for intraoral scanning, the apparatus comprising:
an elongate wand comprising a probe at a distal end of the elongate wand;
one or more cameras disposed within the probe;
one or more non-structured illumination sources disposed within the elongate wand, and arranged such that images of an intraoral surface are captured using the one or more cameras under non-uniform illumination from the one or more non-structured illumination sources; and
a computer processor configured to analyze images captured by the one or more cameras under the non-uniform illumination from the one or more non-structured illumination sources, wherein the computer processor is configured to compensate for non-uniformity of the non- uniform illumination using calibration data generated
(These limitations equally mirror the limitations found in claim 14, found above. Thus, these limitations are rejected based on the same prior art taught and motivations to combine found in claim 14.)
by: capturing, using the one or more cameras, calibration images of a 2D calibration target,
(These limitations equally mirror a limitation found in claim 23, found above. Thus, these limitations are rejected based on the same prior art taught and motivations to combine found in claim 23.)
Bernardini further teaches:
the capturing of the calibration images performed while the 2D calibration target is disposed at a respective distance from the one or more cameras in a z direction (Refer to claim 23 for teachings of the 2D calibration target. Bernardini further teaches a calibration setup in which the geometry of the camera and calibration target is known and the captured calibration images are used to determine illumination characteristics (¶ [0015] “the geometries of the camera and of the target are known, the acquired images are used to geometrically characterize the cast shadows of the target, and thus deduce the locations of the light sources.”). The calibration target must be positioned in a 3D space, which accounts for a given z direction, based on the calibration method relying on a spatial relationship between the camera and the calibration target. Thus, Bernardini teaches capturing calibration images performed while the 2D calibration target is disposed at a distance from the cameras in the z direction.),
and (Bernardini further teaches, “By observing the distribution of light reflected from the target surface from each light source the directional distribution of the light sources can be measured.” (¶ [0015]). Bernardini’s method obtains light intensity values at corresponding sensor coordinates (u,v) because the reflected light is being sampled at multiple locations across the captured image. The spatial relationship between the camera and target (i.e. “distance in the z direction”) is used to model the illumination distribution based on measured light intensities at the sensor locations.)
Elbaz, Bernardini, Mutto and previous teachings of Rodrigues are not relied on for the following claim language:
Rodrigues further teaches: fitting a mathematical function corresponding to an amount of light received at each point (u,v) on a sensor (Rodrigues teaches the camera captures images for camera response and vignetting using calibration images (¶ [0115] “the processor 1002 may be operatively coupled to an input interface 1004 configured to obtain one or more images and output interface 1006 configured to output and/or display the calibrated images.”; Abstract “modelling the vignetting there are three steps: computing the albedo-normalized irradiance, finding points of equal vignetting, when needed, and estimation.”; ¶ [0113] “the image processing module 1010 may estimate the camera response function and the vignetting in case of non-uniform illumination using one or more calibration images.”) Vignetting represents spatial variation of brightness across the camera sensor, and estimating vignetting therefore involves evaluating light intensity at multiple pixel locations across the captured image. Therefore, pixel locations correspond to sensor coordinates (i.e. (u,v)). Thus, Rodrigues determines a mathematical function by describing the amount of light received at different points on the sensor.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the intraoral apparatus and calibration framework taught by Elbaz, Bernardini, Mutto, and Rodrigues to incorporate the determining of a mathematical function by describing the amount of light at different points on a sensor, thus fitting a mathematical function corresponding to an amount of light, as taught by Rodrigues. Doing so would provide the predictable benefit of a more accurate intraoral apparatus calibration system by accounting for light received at each point on the sensor. The inventions all lie in the same field of endeavor of optical imaging systems that use cameras and calibration frameworks to capture and process images and determine illumination.
Elbaz, Bernardini, Rodrigues, and Mutto are not relied on for the following claim language:
In a related art, Treibitz teaches: capturing calibration images when the calibration target is disposed at a respective plurality of distances (Treibitz teaches “An alternative to using multiple calibration objects is to use a single object in multiple known ranges,” and that “the same object is projected to different coordinates when imaged from different distances” (p. 58, sub-section “5.1 Well Posedness and Stability”, ¶ [0004]). These multiple known ranges correspond to different distances of the calibration object relative to the camera in the depth direction (z), thereby providing calibration images captured at a plurality of distances.)
It would have been obvious to modify the intraoral apparatus and calibration techniques of Elbaz, Bernardini, and Mutto, and the fitting of a mathematical function corresponding to an amount of light received at each point on a sensor, as modified by Rodrigues, to incorporate the further taught calibration techniques of Treibitz to improve modeling of illumination and camera response in captured images by capturing calibration images of the target at multiple distances. A person of ordinary skill in the art would have been motivated to incorporate Treibitz’s technique of imaging the calibration object at multiple known ranges so that the calibration model could be determined for different distances in the depth (z) direction relative to the cameras, thereby improving the accuracy of the calibration across varying object distances, because the references already use calibration images of a target to determine illumination and camera response characteristics. The inventions all lie in the same field of endeavor of optical imaging systems that use cameras and calibration frameworks to capture and process images and determine illumination.
Regarding claim 27,
Elbaz, Bernardini, Rodrigues, Mutto, and Treibitz teach the apparatus according to claim 26.
Bernardini further teaches: wherein capturing comprises capturing calibration images of a solid-color 2D calibration target (FIG. 3, ¶ [0034] “…an important aspect of this invention is the light source calibration target 20. The light source calibration target 20 includes a planar surface 300 that is preferably white and diffusely reflecting (Lambertian or substantially Lambertian)…The target 20 includes…a visually contrasting color, e.g., black, on the surface 300.” The color “black” used for the calibration target, as cited as an example taught by Bernardini, is considered equivalent to a “solid-color”. For the sake of brevity, refer back to claims 20, 21, and 23 for further explanation of how the planar surface taught by Bernardini is considered a 2D calibration target and how the calibration target taught by Bernardini is also considered a diffusive calibration target, as disclosed in the instant application.).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL DAVID BAYNES whose telephone number is (571)272-0607. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.B./
Samuel D. Baynes
Art Unit 2665
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665