DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 recites the limitation "the optical window" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pesach et al. (U.S. Patent Application Publication 2020/0205942) in view of Saphier et al. (U.S. Patent Application Publication 2019/0388193).
Regarding claim 1, Pesach et al. discloses an intraoral scanning device for scanning a dental object (Fig. 4 – intraoral scanner (IOS) 100; paragraph [0006] – the use of intraoral scanners (IOS) to size and acquire a 3D image of a tooth or teeth requiring restoration (e.g., for prostheses or model preparation) has become prevalent in dental practices), comprising: an elongated probe defining a longitudinal axis of the scanning device (Fig. 4; paragraph [0066] – the IOS head includes a probe and a plurality of imagers and a plurality of light projectors located around the probe); a first scan unit (Fig. 4) comprising: a first projector unit configured to project a light pattern onto a surface of the dental object, wherein the first projector unit defines a first projector optical axis (Fig. 4; paragraph [0031] – an IOS including: an imager; a plurality of pattern projectors including a first pattern projector distanced from the imager by a first baseline length and a second pattern projector distanced from the imager by a second baseline length greater than the first baseline length; a processor configured to determine a working distance of a ROI from the imager, and select a pattern projector from the plurality of pattern projectors based on the working distance); at least one camera comprising an image sensor for acquiring images, wherein the at least one camera defines a camera optical axis (Fig. 
4; paragraph [0031] – an IOS including: an imager; a plurality of pattern projectors including a first pattern projector distanced from the imager by a first baseline length and a second pattern projector distanced from the imager by a second baseline length greater than the first baseline length; a processor configured to determine a working distance of a ROI from the imager, and select a pattern projector from the plurality of pattern projectors based on the working distance; paragraph [0066] – the IOS head includes a probe and a plurality of imagers and a plurality of light projectors located around the probe); and a reflecting element configured to reflect light from the first projector unit and/or reflect light from the surface of the dental object and onto the image sensor(s) of each camera (paragraph [0086] – an image received by at least one lens disposed on the second and/or third surfaces is imaged on at least a portion of the imager via a mirror or a prism; paragraph [0281] – the IOS head may include a single image sensor 1012, a plurality of light emitters disposed on IOS head first and second surfaces 1006, 1014 respectively facing a first, second and third region of interest (ROI) respectively – first and second surface 1006, 1014 may each include at least one image acquisition lens 1002 – light incident on one or more image acquisition lenses 1002 disposed on first surface 1006 travels directly to and is incident on a first portion of image sensor 1012 pixel area – light incident on one or more image acquisition lenses 1002 disposed on second surfaces 1014 travels to image sensor 1012 via one or more mirrors or prisms 1004 and is incident on second and third portions of image sensor 1012 pixel area); a second scan unit (Fig. 4) comprising: a second projector unit configured to project a light pattern onto a surface of the dental object (Fig. 
4; paragraph [0031] – an IOS including: an imager; a plurality of pattern projectors including a first pattern projector distanced from the imager by a first baseline length and a second pattern projector distanced from the imager by a second baseline length greater than the first baseline length; a processor configured to determine a working distance of a ROI from the imager, and select a pattern projector from the plurality of pattern projectors based on the working distance; paragraph [0066] – the IOS head includes a probe and a plurality of imagers and a plurality of light projectors located around the probe), wherein the second projector unit defines a second projector optical axis (Fig. 4; paragraph [0066] – the IOS head includes a probe and a plurality of imagers and a plurality of light projectors located around the probe); and at least one camera comprising an image sensor for acquiring images, wherein the at least one camera defines a camera optical axis (Fig. 4; paragraph [0031] – an IOS including: an imager; a plurality of pattern projectors including a first pattern projector distanced from the imager by a first baseline length and a second pattern projector distanced from the imager by a second baseline length greater than the first baseline length; a processor configured to determine a working distance of a ROI from the imager, and select a pattern projector from the plurality of pattern projectors based on the working distance; paragraph [0066] – the IOS head includes a probe and a plurality of imagers and a plurality of light projectors located around the probe); wherein each scan unit defines a field of view (FOV) (Fig. 4; paragraph [0226] – the IOS head 150 may include one, two or more image sensors 112, 412, and imaging lens 110, 410 respectively imaging respective FOVs 434, 436). However, Pesach et al. 
fails to disclose wherein the projector optical axis and the camera optical axis of each scan unit define a camera-projector angle of between 5° to 10°.
Referring to the Saphier et al. reference, Saphier et al. discloses an intraoral scanning device for scanning a dental object, wherein the projector optical axis and the camera optical axis of each scan unit define a camera-projector angle of between 5° to 10° (Figs. 2A-2C and 32; paragraph [0280] – in order to improve the overall field of view and field of illumination of the intraoral scanner, cameras 24 and the structured light projectors 22 are positioned such that they do not all face the same direction; paragraph [0281] – Fig. 2C, which is a chart depicting a plurality of different configurations for the position of structured light projectors 22 and cameras 24 in probe 28 – structured light projectors 22 are represented in Fig. 2C by circles and cameras 24 are represented in Fig. 2C by rectangles – similarly to as shown in Fig. 2A, column (b) of Fig. 2C shows cameras 24 positioned so as to have optical axes 46 at an angle of 90 degrees or less, e.g., 35 degrees or less, with respect to each other – column (c) shows a side view of cameras 24 of the various configurations as viewed from a line of sight that is perpendicular to the central longitudinal axis of probe 28; the 5° to 10° range falls within the range disclosed by Saphier et al. – Saphier et al. does not want the cameras and projectors facing the same direction in order to expand their fields of view or fields of illumination).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have had the projector optical axis and the camera optical axis of each scan unit define a camera-projector angle of between 5° to 10°, as disclosed by Saphier et al., in the device disclosed by Pesach et al. in order to improve the overall field of view and field of illumination of the intraoral scanner by positioning the cameras and the structured light projectors such that they do not all face the same direction.
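For context, the geometric relationship between the projector-camera center-to-center distance (see the 2 to 5 mm range of claim 17) and the claimed 5° to 10° camera-projector angle can be sketched with a simple convergent-axes model. The formula, function name, and numeric values below are illustrative assumptions for a pinhole-style geometry; they are not taken from Pesach et al. or Saphier et al.

```python
import math

def camera_projector_angle(baseline_mm: float, working_distance_mm: float) -> float:
    """Angle (in degrees) between a projector optical axis and a camera
    optical axis that both intersect the scanned surface at the given
    working distance, with the camera offset laterally by the baseline."""
    return math.degrees(math.atan(baseline_mm / working_distance_mm))

# Under this model, a 3 mm baseline converging at a 20 mm working
# distance yields an angle inside the claimed 5-10 degree range.
angle = camera_projector_angle(3.0, 20.0)
print(round(angle, 1))  # 8.5
```

This is only a first-order sketch; real devices would set the angle by design rather than derive it from the baseline alone.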
Regarding claim 2, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the projector unit is configured to project unpolarized white light (Pesach et al.: paragraph [0177] – the IOS includes a micro light emitter that projects light at two or more wavelengths – in some embodiments, the projecting microlens is a diffractive optical element (DOE) that creates a given pattern at two or more wavelengths such that the patterns differ by the wavelengths ratio – in some embodiments, the image sensor receives three separate pattern images at three different wavelengths cast on the field of view by the RGB micro light emitter – optionally a processor may estimate a contrast of a projected pattern on a ROI – for example, the contrast estimate may account for incidence angle of the pattern on the ROI, distance from the ROI, power of a light emitter, and/or direction of features in the pattern).
Regarding claim 3, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the light pattern is static in time (Pesach et al.: paragraph [0031] – an IOS including: an imager; a plurality of pattern projectors including a first pattern projector distanced from the imager by a first baseline length and a second pattern projector distanced from the imager by a second baseline length greater than the first baseline length; a processor configured to determine a working distance of a ROI from the imager, and select a pattern projector from the plurality of pattern projectors based on the working distance; paragraph [0068] – at least one light parameter includes at least one of: a structured light, a continuous light; paragraph [0177] – the IOS includes a micro light emitter that projects light at two or more wavelengths – in some embodiments, the projecting microlens is a diffractive optical element (DOE) that creates a given pattern at two or more wavelengths such that the patterns differ by the wavelengths ratio – in some embodiments, the image sensor receives three separate pattern images at three different wavelengths cast on the field of view by the RGB micro light emitter – optionally a processor may estimate a contrast of a projected pattern on a ROI – for example, the contrast estimate may account for incidence angle of the pattern on the ROI, distance from the ROI, power of a light emitter, and/or direction of features in the pattern; static is implied because intensities and patterns are not changed during illumination).
Regarding claim 4, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the scanning device comprises an optical window made of a polymer or glass, wherein the optical window is located in a distal end of the elongated probe (Pesach et al.: Figs. 1 and 8; paragraph [0268] – spacer frame 808 may be attached to a glass wafer 814 that includes a plurality of structured light pattern transparencies 816, each corresponding to a light emitter 118 – glass wafer 814 may also include a clear portion 818 corresponding to the image receiving area of image sensor 112; paragraph [0269] – glass wafer 814 may be attached to spacer frame 810, which in turn may be attached to layer 806; Saphier et al.: Figs. 1 and 32; paragraph [0042] – sensing surface; paragraph [0317] – glass surface of handheld wand 20, through which structured light projectors 22 project and cameras 24 view, as probe 28 enters the intraoral cavity).
Regarding claim 5, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the first and second scan units are arranged to project the light pattern through the optical window (Pesach et al.: Figs. 1 and 8; paragraph [0268] – spacer frame 808 may be attached to a glass wafer 814 that includes a plurality of structured light pattern transparencies 816, each corresponding to a light emitter 118 – glass wafer 814 may also include a clear portion 818 corresponding to the image receiving area of image sensor 112; paragraph [0269] – glass wafer 814 may be attached to spacer frame 810, which in turn may be attached to layer 806; Saphier et al.: Figs. 1 and 32; paragraph [0042] – sensing surface; paragraph [0317] – glass surface of handheld wand 20, through which structured light projectors 22 project and cameras 24 view, as probe 28 enters the intraoral cavity).
Regarding claim 6, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the second scan unit further comprises a reflecting element configured to reflect light from the second projector unit and/or reflect light from the surface of the dental object and onto the image sensor(s) of each camera of the second scan unit (Pesach et al.: Figs. 37A-38B; paragraph [0086] – an image received by at least one lens disposed on the second and/or third surfaces is imaged on at least a portion of the imager via a mirror or a prism; paragraph [0281] – the IOS head may include a single image sensor 1012, a plurality of light emitters disposed on IOS head first and second surfaces 1006, 1014 respectively facing a first, second and third region of interest (ROI) respectively – first and second surface 1006, 1014 may each include at least one image acquisition lens 1002 – light incident on one or more image acquisition lenses 1002 disposed on first surface 1006 travels directly to and is incident on a first portion of image sensor 1012 pixel area – light incident on one or more image acquisition lenses 1002 disposed on second surfaces 1014 travels to image sensor 1012 via one or more mirrors or prisms 1004 and is incident on second and third portions of image sensor 1012 pixel area; paragraph [0414] – Fig. 37A is a side view schematic illustration of an IOS with a fixed head – in this figure an exemplary scanner is composed from an imager (cam) and a projector that are one above the other and a mirror 3702 to transform the FOV from looking forward to looking down; paragraph [0415] – Fig. 37B is a side view schematic illustration of a FOV splitting attachment 3701 to an IOS with a fixed head – in Fig. 37B an attachment 3701 is connected to the bottom of the scanner – in some embodiments, the attachment splits the FOV of the camera and that of the projector e.g. in two different directions – for example, sideways).
Regarding claim 7, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the scan units are positioned along the longitudinal axis of the scanning device (Pesach et al.: Fig. 32A and 32B).
Regarding claim 8, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the first and second projector optical axes are substantially parallel (Pesach et al.: Fig. 32C; Saphier et al.: Fig. 32).
Regarding claim 9, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein a minimum bounding box of a first scan unit overlaps a minimum bounding box of a second scan unit (Pesach et al.: Fig. 4 - a first bounding box and a second bounding box can be seen in the Figure - both bounding boxes are overlapping; Saphier et al.: Fig. 32 - a first bounding box and a second bounding box can be seen in the Figure - both bounding boxes are overlapping).
Regarding claim 10, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the first and second scan unit are configured to project the light pattern on the same side of the scanning device (Pesach et al.: Fig. 4; Saphier et al.: Fig. 32).
Regarding claim 11, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein each scan unit comprises two or more cameras, wherein said cameras are configured to acquire a set of images (Pesach et al.: Fig. 4 – image sensors 112, 412; paragraph [0033] – acquiring a plurality of small scale images with an IOS scanner – stitching together the plurality of small scale images to form a large scale 3D model; Saphier et al.: Fig. 32; paragraph [0013] – each camera may be configured to capture a plurality of images that depict at least a portion of the projected pattern of light on an intraoral surface).
Regarding claim 12, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claims 1 and 11 including that wherein the number of images in the set of images corresponds to the number of cameras in a given scan unit (Pesach et al.: Fig. 4 – image sensors 112, 412; paragraph [0033] – acquiring a plurality of small scale images with an IOS scanner – stitching together the plurality of small scale images to form a large scale 3D model; Saphier et al.: Fig. 32; paragraph [0013] – each camera may be configured to capture a plurality of images that depict at least a portion of the projected pattern of light on an intraoral surface).
Regarding claim 13, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claims 1 and 11 including that wherein the cameras are synchronized such that images within the set of images are acquired simultaneously (Pesach et al.: paragraph [0152]; paragraphs [0379] and [0380] – images are captured simultaneously – furthermore, pairs of imagers may also be activated together in order to achieve a particular imaging goal).
Regarding claim 14, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein each scan unit comprises four cameras (Pesach et al.: Figs. 44A and 44B; paragraph [0444] – in some embodiments, more than two FOVs capture two or more dental objects – for example, as illustrated in Fig. 44B where, in some embodiments, IOS 4411 includes four imager-projector pairs (e.g., pair 4413, 4415); paragraph [0478] – Fig. 44B, in some embodiments, IOS 4411 includes n FOVs – for example, 4 imager FOVs 4415 and 4 associated projector FOVs 4413 (e.g. the IOS including 4 imagers or less than four imagers, where one or more FOV is split e.g. using mirror/s); Saphier et al.: Fig. 2C; paragraph [0281] – Fig. 2C, which is a chart depicting a plurality of different configurations for the position of structured light projectors 22 and cameras 24 in probe 28 – structured light projectors 22 are represented in Fig. 2C by circles and cameras 24 are represented in Fig. 2C by rectangles).
Regarding claim 15, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein each scan unit comprises one projector unit and a plurality of cameras, wherein the cameras are arranged on two axes to form a cross in the plane, wherein the projector unit is located in the center of the cross when viewed along the projector optical axis, wherein the two axes are substantially parallel with at least two edges of the reflecting element (Pesach et al.: paragraphs [0270]-[0279]; Saphier et al.: Figs. 2C and 32; paragraph [0281] – Fig. 2C, which is a chart depicting a plurality of different configurations for the position of structured light projectors 22 and cameras 24 in probe 28 – structured light projectors 22 are represented in Fig. 2C by circles and cameras 24 are represented in Fig. 2C by rectangles).
Regarding claim 16, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein each scan unit defines an angular field of view (AFOV) of between 60° to 75° (Pesach et al.: Figs. 6 and 9A).
Regarding claim 17, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein for a given scan unit, the center-to-center distance between the projector unit and a given camera of that scan unit is between 2 to 5 mm, preferably between 3 to 4 mm (Pesach et al.: paragraph [0159] – the IOS head width may be between 5-25 mm – in some embodiments, the IOS head width may be between 3-20 mm – in some embodiments, the IOS head width may be between 1-15 mm – in some embodiments, the IOS head length may be between 3-20 cm; based on the length and width of the IOS head, the number of projector units, and cameras provided on the head, the center-to-center distance between the projector unit and a given camera could be between 2 to 5 mm; Saphier et al.: Figs. 2C and 32; paragraph [0281] – Fig. 2C, which is a chart depicting a plurality of different configurations for the position of structured light projectors 22 and cameras 24 in probe 28 – structured light projectors 22 are represented in Fig. 2C by circles and cameras 24 are represented in Fig. 2C by rectangles).
Regarding claim 18, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the field of view of each scan unit is at least 400 mm2 at a working distance of between 15 mm and 50 mm (Pesach et al.: paragraph [0334] – the focus distance is between 5 and 50 mm; paragraph [0389] – the FOV of the scanner may cover an area between 10 to 300 mm2 and/or between 300 to 1000 mm2).
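The interplay between the angular field of view addressed for claim 16 (60° to 75°) and the field-of-view area addressed for claim 18 (at least 400 mm² at a 15 mm to 50 mm working distance) can be illustrated with a simple square-FOV approximation. The model, function name, and numbers below are illustrative assumptions only, not figures taken from either reference.

```python
import math

def fov_area_mm2(afov_deg: float, working_distance_mm: float) -> float:
    """Approximate area covered at a given working distance by a scan
    unit with the given angular field of view, assuming a square FOV."""
    side = 2.0 * working_distance_mm * math.tan(math.radians(afov_deg / 2.0))
    return side * side

# Under this model, a 70 degree AFOV at the 15 mm lower bound of the
# claimed working-distance range already covers more than 400 mm^2.
print(round(fov_area_mm2(70.0, 15.0)))  # 441
```

The covered area grows with the square of the working distance, so any AFOV in the 60° to 75° range comfortably exceeds 400 mm² at the upper distances under this approximation.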
Regarding claim 19, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the scanning device employs a triangulation-based scanning principle (Saphier et al.: paragraph [0009] – since the pattern is coded, correspondences between image points and points of the projected pattern may be easily found – the decoded points can be triangulated and 3D information recovered; paragraph [0305] – each spot is located on at least one camera sensor 58 – since the cameras 24 are calibrated, the three-dimensional spot location of each spot is computed by triangulation based on images of the spot in multiple different cameras).
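The triangulation principle described in the cited paragraphs of Saphier et al. can be sketched as the standard pinhole-stereo depth computation: once a projected spot is matched between calibrated views, depth follows from the disparity. The function and the calibration numbers below are a hypothetical textbook sketch, not code or values from either reference.

```python
def triangulate_depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Classic stereo triangulation for rectified views: depth is
    focal length times baseline divided by the pixel disparity."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical calibration: 800 px focal length and a 4 mm baseline
# (within the 2-5 mm center-to-center range discussed for claim 17);
# a spot matched with 160 px of disparity lies at a 20 mm depth.
print(triangulate_depth_mm(800.0, 4.0, 160.0))  # 20.0
```

In the multi-camera arrangement the references describe, the same relation is applied per matched spot across each calibrated camera pair.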
Regarding claim 20, Pesach et al. in view of Saphier et al. discloses all of the limitations as previously discussed with respect to claim 1 including that wherein the scanning device comprises one or more processors configured for generating a three- dimensional (3D) representation of the dental object (Pesach et al.: paragraph [0006] – the use of intraoral scanners (IOS) to size and acquire a 3D image of a tooth or teeth requiring restoration (e.g., for prostheses or model preparation) has become prevalent in dental practices; paragraph [0033] – acquiring a plurality of small scale images with an IOS scanner – stitching together the plurality of small scale images to form a large scale 3D model; paragraph [0227] – provide the best possible 3D image of ROI 438; paragraph [0393] – produce a 3D model).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER R JONES whose telephone number is (571)272-7368. The examiner can normally be reached Mon. - Fri.: 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HEATHER R JONES/Primary Examiner, Art Unit 2481
February 19, 2026