DETAILED ACTION
Notice of Pre–AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is in response to the application filed on 10/24/2023.
Claims 1–22 are pending in this application.
Drawings
The drawing(s) filed on 10/24/2023 are accepted by the Examiner.
Claim Rejections – 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre–AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre–AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1–14 and 20–22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Elbaz et al. (US 10380212 B2, hereafter, "Elbaz").
Regarding claim 1, Elbaz discloses a method comprising: providing an intra–oral camera comprising a confined light injector (See Elbaz, [Abstract], Described herein are intraoral scanning methods and apparatuses for generating a three–dimensional model of a subject's intraoral region (e.g., teeth) including both surface features and internal features);
projecting a spatially confined light in a first direction via the confined light injector, at an illumination point of the at least one tooth (See Elbaz, [Col. 25, ln. 64–67 – Col. 26, ln. 6–15], FIGS. 4B–4F illustrate other emitters and detectors for use with of any of the penetrating wavelengths that may be used to take images into the object having semi–transparent strongly scattering regions (e.g., teeth), ..., In FIGS. 4C–4F the angle of the ray of light emitted and collected is very small ( e.g., around 0°) and can be collected by placing the emitter 403, 403' and detector 405 assembly (e.g., CMOS, CCD, etc.) adjacent to each other, as shown in FIG. 4C, combined with each other, as shown in FIG. 4D, or simply sharing a common or near–common beam path, as shown in FIGS. 4E and 4F, which may use reflection or waveguides to direct emitted and/or received light, including the use of beam splitters (dichroic beam splitters) and/or filters);
recording, by an image sensor, one or more surface light distribution images received via captured light backscattered from the tooth (See Elbaz, [Col. 25, ln. 67 – Col. 26, ln. 1–4], These images typically collect reflective mode (e.g., light at a penetrative wavelength that has passed into the tooth, and been scattered/ reflected from internal structures so that it can be collected by the detector);
computing, by a tooth defect detection module, a defect condition of the at least one tooth by classifying the one or more surface light distribution images based on computation of a change in light distribution in the one or more surface light distribution images caused by a defective tooth material (See Elbaz, [Col. 26, ln. 36–40], The use of penetration imaging, and particularly small angle illumination/imaging, which may also be described as reflective imaging, may provide information about internal regions (such as cracks, caries, lesions, etc.) of the teeth that would not otherwise be available),
wherein the spatially confined light is projected at the illumination point in a contactless manner (See Elbaz, [Col. 24, ln. 63–67], The use of a small angle for penetration imaging may include imaging into the tooth using the wand in a way that enables unconstraint movement around the tooth, and may enable capturing the internal structure data while also scanning for 3D (surface) model data. Note: the unconstrained movement implies that the illumination is contactless).
Regarding claim 2, Elbaz discloses the method of claim 1, wherein the defect condition is defined by caries, demineralization, and/or cracks in the at least one tooth (See Elbaz, [Col. 26, ln. 36–40], The use of penetration imaging, and particularly small angle illumination/imaging, which may also be described as reflective imaging, may provide information about internal regions (such as cracks, caries, lesions, etc.) of the teeth that would not otherwise be available).
Regarding claim 3, Elbaz discloses the method of claim 1, further comprising: scanning an entire jaw by projecting the spatially confined light at a plurality of illumination points on a plurality of teeth (See Elbaz, [Col. 43, ln. 18–26], In any of these methods, an intraoral scanner 2801 capable of measuring both surface (including, in some variations color, e.g., R–G–B color) and internal structures may be used to scan the patient's teeth (e.g., taking images and scans of the jaw, including the teeth). The apparatus may scan in different modalities, including surface (non–penetrative or not substantially penetrating, e.g., visible light, white light) and penetrative (e.g., near IR/IR) wavelengths).
Regarding claim 4, Elbaz discloses the method of claim 1, further comprising: projecting the spatially confined light in a plurality of directions, by the confined light injector, at a plurality of illumination points of the at least one tooth (See Elbaz, [Col. 15, ln. 2–6], taking a plurality of near–infrared (near–IR) images into the subject's teeth at different orientations using the intraoral scanner emitting both a near–IR wavelength and a non–penetrative wavelength);
recording a plurality of surface light distribution images from different directions (See Elbaz, [Col. 15, ln. 2–6], taking a plurality of near–infrared (near–IR) images into the subject's teeth at different orientations using the intraoral scanner emitting both a near–IR wavelength and a non–penetrative wavelength);
computing the defect condition based on at least two surface light distribution images from the change in light distribution caused by the defective tooth material (See Elbaz, [Col. 14, ln. 65–67 – Col. 15, ln. 1–6], Also described herein are methods of imaging cracks and caries in teeth. For example, described herein are methods of imaging into a subject's teeth to detect cracks and caries using an intraoral scanner, the method comprising: scanning the intraoral scanner over the subject's teeth; taking a plurality of near–infrared (near–IR) images into the subject's teeth at different orientations using the intraoral scanner emitting both a near–IR wavelength and a non–penetrative wavelength);
wherein the computing the defect condition converges to a mutual tooth defect condition for the at least two surface light distribution images (See Elbaz, [Col. 15, ln. 6–16], determining a position of the intraoral scanner relative to the subject's teeth for each location of an image from the plurality of near–IR images using the non–penetrative wavelength; and generating a three–dimensional (3D) volumetric model of the subject's teeth using the plurality of 10 near–IR images and the position of the intraoral scanner relative to the subject's teeth for each near–IR image of the plurality of near–IR images. Any of these methods may include analyzing the volumetric model to identify a crack or caries ( or other internal regions of the teeth). Note: the 3D model is being interpreted as the convergence of the images to show the defect).
Regarding claim 5, Elbaz discloses the method of claim 4, wherein the mutual tooth defect condition comprises at least one of a presence of the tooth defect, a position of the tooth defect, and a geometry of the tooth defect (See Elbaz, [Col. 33, ln. 30–34], The 3D model may be used, for example, to measure size shape and location of lesion including decay, to assess the type of decay based on translucently, color, shape, and/or to assess the type of surface issues based on surface illumination e.g. cracks, decay, etc. 609).
Regarding claim 6, Elbaz discloses the method of claim 4, further comprising: masking out back reflection from the one or more surface light distribution images to obtain one or more corresponding evaluable volume scattering image data for remaining areas of the one or more surface light distribution images (See Elbaz, [Col. 25, ln. 22–29], Alternatively or additionally, the apparatuses and/or methods may reduce or eliminate the problems arising from saturation with direct reflection by using only the nonsaturated pixels. In some variations, the surface information may be subtracted from the penetration images as part of the process. For example, visible light images ("viewfinder images") or surface imaging may be used to remove direct surface reflections. Note: the Examiner is interpreting the subtraction as the claimed masking out; the volume scattering image is the resulting penetrative image),
wherein the back reflection is caused by a surface reflection or close–to–surface volume scattering at the illumination point (See Elbaz, [Col. 25, ln. 22–29], Alternatively or additionally, the apparatuses and/or methods may reduce or eliminate the problems arising from saturation with direct reflection by using only the nonsaturated pixels. In some variations, the surface information may be subtracted from the penetration images as part of the process. For example, visible light images ("viewfinder images") or surface imaging may be used to remove direct surface reflections).
Regarding claim 7, Elbaz discloses the method of claim 6, further comprising: for at least one surface light distribution image of the one or more surface light distribution images, reconstructing tooth information masked out from said masking using at least one other surface light distribution image (See Elbaz, [Col. 25, ln. 7–13], These direct reflections may be problematic if they saturate the sensor, or if they show surface information but obscure deeper structure information. To overcome these problems, the apparatus and methods of using them described herein may capture and use multiple illumination orientations taken from the same position. Note: Elbaz uses multiple images taken from other illumination orientations to compensate for the back reflection).
Regarding claim 8, Elbaz discloses the method of claim 4, further comprising: combining a plurality of the one or more surface light distribution images to form an overall image (See Elbaz, [Col. 28, ln. 4–9], For example a 3D reconstruction of the tooth data may be reconstructed by an algorithm combining several (e.g., multiple) 2D images using the any of the internal feature imaging techniques described herein, typically taken at several different angles or orientations).
Regarding claim 9, Elbaz discloses the method of claim 1, further comprising: overlaying the one or more surface light distribution images with visible light information as a live or stored video stream (See Elbaz, [Col. 11, ln. 63–67 – Col. 12, ln. 1–4], In any of these methods and apparatuses, the 3D surface model may be concurrently captured using a non–penetrative wavelength (e.g., surface scan) while capturing the penetrative images. For example, capturing may comprise capturing surface images of the subject's teeth while capturing the plurality of images of the interior of the subject's teeth. The method may also include forming the three dimensional model of the subject's teeth from the captured surface images. [Col. 12, ln. 20–24], In any of the methods and apparatuses described herein, the 3D model including the internal structure(s) may be displayed while the scanner is operating. This may advantageously allow the user to see, in real–time or near real–time the internal structure(s) in the subject's teeth. Note: the Examiner is interpreting the penetrative images as the surface light distribution images and the visible light information as the surface scan).
Regarding claim 10, Elbaz discloses the method of claim 1, wherein the illumination point is chosen to be at a point that is not in an image field of the sensor, thereby reducing or eliminating a masking out process of back reflection (See Elbaz, [Col. 25, ln. 53–60], Alternatively, the wand may be configured with multiple imaging sensors (cameras) and multiple light sources, allowing multiple penetration images may be taken at approximately the same time, e.g., by turning on multiple sensors when illuminating from one or more LED orientations (e.g., FIGS. 5G and 5E, etc.). In FIGS. 5A–5I, at least nine different orientations of penetration images may be taken, as shown. See also [FIG. 5A–5E]. Note: the figures show orientations in which the arrow of illumination going toward the tooth and the arrow going away from the tooth toward the sensor are in the same image field).
Regarding claim 11, Elbaz discloses the method of claim 1, wherein the spatially confined light is a laser beam in a near–infrared (NIR) wavelength range (See Elbaz, [Col. 26, ln. 16–19], As mentioned above, any appropriate sensor may be used, including CMOS or CCD cameras, or any other sensor that is capable of detecting the appropriate wavelength, such as near–IR wavelength detectors).
Regarding claim 12, Elbaz discloses the method of claim 1, wherein the spatially confined light is polarized and a cross–polarized filter is disposed in front of the image sensor to suppress direct back reflection from the tooth surface (See Elbaz, [Col. 31, ln. 41–50], For example, FIG. 2E shows a schematic of intraoral scanner configured to do both surface scanning (e.g., visible light, non–penetrative) and penetrative scanning using a near infra–red (NIR) wavelength (at 850 nm in this example). In FIG. 2E, the scanner includes a near–IR illumination light 289 and a first polarizer 281 and a second polarizer 283 in front of the image sensor 285 to block near–IR light reflected off the surface of the tooth 290 (P–polarization light) while still collecting near–IR light scattered from internal tooth structures/regions (S–polarization light)).
Regarding claim 13, Elbaz discloses the method of claim 11, wherein the spatially confined light causes a diffuse illumination from inside of the at least one tooth with a highest intensity at the illumination point and decreasing intensity into a periphery of the illumination point (See Elbaz, [Col. 26, ln. 24–33], In penetration imaging conditions, the light generating the captured image has traveled though the object, and the longer the path, the longer the scattering that will occur, resulting in a more smoothed–out illumination when compared to direct illumination. In front illumination, as results with small–angle illumination, the strongest amount of light will be present in the region nearest to the illuminator (e.g., LED), which will back scatter; this nearby region (e.g., the first 1–2 mm) is an important region for detecting caries. Note: the light is strongest at the region nearest the illuminator, which the Examiner interprets as the illumination point; this implies that the intensity decreases farther from that point).
Regarding claim 14, Elbaz discloses the method of claim 1, wherein the image sensor is sensitive to a wavelength range of the spatially confined light (See Elbaz, [Col. 25, ln. 64–67], FIGS. 4B–4F illustrate other emitters and detectors for use with of any of the penetrating wavelengths that may be used to take images into the object having semi–transparent strongly scattering regions (e.g., teeth)).
Regarding claim 20, Elbaz discloses the method of claim 1, further comprising: projecting the spatially confined light in the first direction into a neighboring tooth to provide an indirect illumination of an interproximal caries or crack (See Elbaz, [Col. 23, ln. 7–21], Any of the apparatuses and methods described herein may be used to scan for and/or identify internal structures such as cracks, caries (decay) and lesions in the enamel and/or dentin. Thus, any of the apparatuses described herein may be configured to perform scans that may be used to detect internal structures using a penetrative wavelength or spectral range of penetrative wavelengths. Also described herein are methods for detecting cracks, caries and/or lesions or other internal feature such as dental fillings, etc. A variety of penetrative scanning techniques (penetration imaging) may be used or incorporated into the apparatus, including but not limited to trans illumination and small–angle penetration imaging, both of which detect the passage of penetrative wavelengths of light from or through the tissue (e.g., from or through a tooth or teeth). Note: the examiner is interpreting the indirect illumination as the trans–illumination through a tooth to detect caries and cracks).
Regarding claim 21, claim 21 recites limitations substantially similar to those of claim 1, and the rejection of claim 1 set forth above applies equally to claim 21; the limitations shared with claim 1 are not repeated herein, but are incorporated by reference. Furthermore, Elbaz teaches a system comprising: an intra–oral camera including a confined light injector, a processor, and a memory storing instructions that, when executed by the processor, configure the system to perform the method (See Elbaz, [FIG. 16], 508 Memory Subsystem, 502 Processor(s)).
Regarding claim 22, claim 22 recites limitations substantially similar to those of claim 1, and the rejection of claim 1 set forth above applies equally to claim 22; the limitations shared with claim 1 are not repeated herein, but are incorporated by reference. Furthermore, Elbaz teaches a non–transitory computer readable storage medium storing one or more programs that, when executed by a processor, cause the intra–oral camera to perform the method (See Elbaz, [FIG. 16], 508 Memory Subsystem, 502 Processor(s)).
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre–AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre–AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non–obviousness.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Elbaz et al. (US 10380212 B2, hereafter, "Elbaz") in view of Chang (US 20150289954 A1, hereafter, "Chang").
Regarding claim 15, Elbaz teaches the method of claim 1, further comprising: [computing a location of the illumination point based on a position of the intra–oral camera relative to the at least one tooth using 3D geometry information captured by the intra–oral camera to generate a 3D data set of the at least one tooth].
However, Elbaz fails to teach computing a location of the illumination point based on a position of the intra–oral camera relative to the at least one tooth using 3D geometry information captured by the intra–oral camera to generate a 3D data set of the at least one tooth.
Chang, working in the same field of endeavor, teaches: computing a location of the illumination point based on a position of the intra–oral camera relative to the at least one tooth using 3D geometry information captured by the intra–oral camera to generate a 3D data set of the at least one tooth (See Chang, ¶ [0086], Active triangulation, or structured light methods, overcomes the stereo correspondence issue by projecting known patterns of light onto an object to measure its shape. The simplest structured light pattern is simply a spot of light, typically produced by a laser. The geometry of the setup between the light projector and the position of the camera observing the spot of light reflected from the target object's surface enables the calculation of the relative range of the point on which the light spot falls by trigonometry. [0087] The overall accuracy of a 3D laser triangulation scanning system is based primarily upon its ability to meet two objectives: 1) accurately measure the center of the illumination light reflected from the target surface and 2) accurately measure the position of the illumination source and the camera at each of the positions used by the scanner to acquire an image. ¶ [0089], determining the position of the one or more image apertures using the fixed external coordinate reference frame; capturing one or more images of the dental structure through one or more of the image apertures; and generating a 3D model of the dental structure based on the captured images).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz to compute a location of the illumination point based on a position of the intra–oral camera relative to the at least one tooth using 3D geometry information captured by the intra–oral camera to generate a 3D data set of the at least one tooth, as taught by Chang. The suggestion/motivation would have been to quickly and accurately process the three–dimensional model of teeth for further processing (See Chang, ¶ [0002–0010]).
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Chang with Elbaz to obtain the invention as specified in claim 15.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Elbaz et al. (US 10380212 B2, hereafter, "Elbaz") in view of Saphier et al. (US 20200404243 A1, hereafter, "Saphier").
Regarding claim 16, Elbaz teaches the method of claim 1, further comprising: [computing a location of the illumination point based on pixels of the one or more surface light distribution images with exposure values that exceed a threshold].
However, Elbaz fails to teach computing a location of the illumination point based on pixels of the one or more surface light distribution images with exposure values that exceed a threshold.
Saphier, working in the same field of endeavor, teaches: computing a location of the illumination point based on pixels of the one or more surface light distribution images with exposure values that exceed a threshold (See Saphier, ¶ [0061], In a further implementation of the second method, the processor sets a threshold, such that a detected feature that is below the threshold is not considered by the correspondence algorithm, and to search for the feature corresponding to projector ray r1 in the identified search space, the processor lowers the threshold in order to consider features that were not considered by the correspondence algorithm. For some implementations, the threshold is an intensity threshold. Note: Examiner is interpreting the projection ray r1 as the illumination point).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz to compute a location of the illumination point based on pixels of the one or more surface light distribution images with exposure values that exceed a threshold, as taught by Saphier. The suggestion/motivation would have been to accurately capture the intraoral scan when using structured light imaging (See Saphier, ¶ [0003–0005]).
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Saphier with Elbaz to obtain the invention as specified in claim 16.
Claims 17–19 are rejected under 35 U.S.C. 103 as being unpatentable over Elbaz et al. (US 10380212 B2, hereafter, "Elbaz") in view of Atiya et al. (US 20230025243 A1, hereafter, "Atiya").
Regarding claim 17, Elbaz teaches the method of claim 1, [wherein the computing is performed by comparison of the one or more surface light distribution images with a database of stored surface light distribution images that include defective and healthy tooth material data].
However, Elbaz fails to teach wherein the computing is performed by comparison of the one or more surface light distribution images with a database of stored surface light distribution images that include defective and healthy tooth material data.
Atiya, working in the same field of endeavor, teaches: wherein the computing is performed by comparison of the one or more surface light distribution images with a database of stored surface light distribution images that include defective and healthy tooth material data (See Atiya, ¶ [0119], For example, a machine learning model can be trained with input data including images and/or 3D models with or without caries or cracks and corresponding output data identifying whether a crack(s) and/or caries are located in the images and/or 3D models. The trained machine learning model can receive as input the scan data (e.g., images) and or 3D model created from a particular scan and output particular features of the images and/or 3D models that may be cracks and/or caries and a level of confidence (e.g., probability) that the identified features is a crack and/or caries. Note: the Examiner is interpreting the machine learning model as being trained based on the surface light distribution images).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz such that the computing is performed by comparison of the one or more surface light distribution images with a database of stored surface light distribution images that include defective and healthy tooth material data, as taught by Atiya. The suggestion/motivation would have been to scan accurately and improve the quality of scans (See Atiya, ¶ [0003–0004] and ¶ [0051]).
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Atiya with Elbaz to obtain the invention as specified in claim 17.
Regarding claim 18, Elbaz teaches the method of claim 1, [wherein the computing is performed by a machine–learned model that is trained based on at least a plurality of test surface light distribution images that include defective and healthy tooth material data].
However, Elbaz fails to teach wherein the computing is performed by a machine–learned model that is trained based on at least a plurality of test surface light distribution images that include defective and healthy tooth material data.
Atiya, working in the same field of endeavor, teaches: wherein the computing is performed by a machine–learned model that is trained based on at least a plurality of test surface light distribution images that include defective and healthy tooth material data (See Atiya, ¶ [0119], For example, a machine learning model can be trained with input data including images and/or 3D models with or without caries or cracks and corresponding output data identifying whether a crack(s) and/or caries are located in the images and/or 3D models. The trained machine learning model can receive as input the scan data (e.g., images) and or 3D model created from a particular scan and output particular features of the images and/or 3D models that may be cracks and/or caries and a level of confidence (e.g., probability) that the identified features is a crack and/or caries).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz such that the computing is performed by a machine–learned model that is trained based on at least a plurality of test surface light distribution images that include defective and healthy tooth material data, as taught by Atiya. The suggestion/motivation would have been to scan accurately and improve the quality of scans (See Atiya, ¶ [0003–0004] and ¶ [0051]).
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Atiya with Elbaz to obtain the invention as specified in claim 18.
Regarding claim 19, Elbaz teaches the method of claim 17, [wherein the database is generated based on light propagation in extracted or in–situ teeth and/or Monte–Carlo simulation of light propagation in virtual tooth models].
However, Elbaz fails to teach wherein the database is generated based on light propagation in extracted or in–situ teeth and/or Monte–Carlo simulation of light propagation in virtual tooth models.
Atiya, working in the same field of endeavor, teaches: wherein the database is generated based on light propagation in extracted or in–situ teeth and/or Monte–Carlo simulation of light propagation in virtual tooth models (See Atiya, ¶ [0102], Via such scanner application, the scanner 150 may provide intraoral scan data 135A–N to computing device 105. The intraoral scan data 135A–N may be provided in the form of intraoral scan data sets, each of which may include 2D intraoral images (e.g., color 2D images) and/or 3D intraoral scans of particular teeth and/or regions of an intraoral site. Note: the Examiner is interpreting in–situ teeth as teeth in the mouth; the scanner generates a data set based on teeth scanned in the mouth).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elbaz such that the database is generated based on light propagation in extracted or in–situ teeth and/or Monte–Carlo simulation of light propagation in virtual tooth models, as taught by Atiya. The suggestion/motivation would have been to scan accurately and improve the quality of scans (See Atiya, ¶ [0003–0004] and ¶ [0051]).
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Atiya with Elbaz to obtain the invention as specified in claim 19.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Schnabel (US 20240189065 A1) teaches a method for assisting an intraoral scan including providing an intraoral image of a patient, and providing an extraoral image; the extraoral image being representative of the position of an extraoral scanner part. The teachings further include generating, using the intraoral image and the extraoral image, a mapping function correlating the position of the extraoral scanner part with the position of the intraoral scanner part; and computing, using the mapping function, a desired extraoral position of the extraoral scanner part; the desired extraoral position corresponding to a preferable intraoral position of the intraoral scanner part. The teachings also relate to a system, a device, a use, data, and a storage medium.
Kaneda (US 11109752 B2) teaches a dental caries diagnosis device comprising a light source which is configured to emit examination light (R) and a light receiving unit (4f) which is configured to receive the examination light (R) with which a tooth has been irradiated includes: a head–side casing (4a1) which is inserted into a mouth in a contactless manner with respect to a tooth or a gum and which projects the examination light (R) toward a tooth; and a filter (4e) which is disposed in front of the light receiving unit (4f) and which is configured to remove a noise component from the received light, wherein the light receiving unit (4f) is configured to receive the examination light (R) which has been transmitted through the tooth.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER whose telephone number is (703)756–5849. The examiner can normally be reached Monday – Thursday 5:30 am – 2:30 pm, Friday 5:30 am – 9:30 am PST.
Examiner interviews are available via telephone, in–person, and video conferencing using a USPTO supplied web–based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272–4637. The fax phone number for the organization where this application or proceeding is assigned is 571–273–8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent–center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866–217–9197 (toll–free). If you would like assistance from a USPTO Customer Service Representative, call 800–786–9199 (IN USA OR CANADA) or 571–272–1000.
/DION J SATCHER/Patent Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676