Prosecution Insights
Last updated: April 19, 2026
Application No. 17/434,064

SCANNER DEVICE WITH REPLACEABLE SCANNING-TIPS

Status: Non-Final OA (§103)
Filed: Aug 26, 2021
Examiner: BURKE, TIONNA M
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: 3Shape A/S
OA Round: 5 (Non-Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 9m
With Interview: 73%

Examiner Intelligence

Career Allow Rate: 54% (233 granted / 431 resolved; -0.9% vs TC avg)
Interview Lift: +19.3% for resolved cases with interview
Typical Timeline: 4y 9m avg prosecution; 46 applications currently pending
Career History: 477 total applications across all art units
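The headline figures above are related by simple arithmetic. The sketch below is a minimal illustration, assuming the dashboard computes the allow rate as grants over resolved cases and applies the interview lift additively; the tool's actual methodology is not shown on this page.

```python
# Minimal sketch of how the headline examiner figures relate arithmetically.
# Assumption: a simple grants/resolved ratio plus an additive interview lift;
# the dashboard's actual methodology is not disclosed on this page.

granted = 233           # from "233 granted / 431 resolved"
resolved = 431
interview_lift = 0.193  # "+19.3% Interview Lift"

career_allow_rate = granted / resolved               # ~0.541 -> shown as 54%
with_interview = career_allow_rate + interview_lift  # ~0.734 -> shown as 73%

print(f"Career allow rate:    {career_allow_rate:.1%}")
print(f"With interview (est): {with_interview:.1%}")
```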

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Deltas are against the Tech Center average estimate. Based on career data from 431 resolved cases.
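Because each statute-specific figure is reported alongside its delta against the Tech Center average, the implied TC baseline can be backed out directly. A small sketch, assuming the deltas are plain percentage-point differences (the page does not define them further):

```python
# Back out the implied Tech Center baselines from the statute-specific figures.
# Assumption: each "vs TC avg" delta is a simple percentage-point difference
# from the Tech Center average estimate used as the chart baseline.

examiner_rates = {"§101": 11.0, "§103": 60.1, "§102": 18.1, "§112": 7.5}
deltas_vs_tc   = {"§101": -29.0, "§103": +20.1, "§102": -21.9, "§112": -32.5}

for statute, rate in examiner_rates.items():
    tc_avg = rate - deltas_vs_tc[statute]   # e.g. §103: 60.1 - 20.1 = 40.0
    print(f"{statute}: examiner {rate:.1f}% vs implied TC average {tc_avg:.1f}%")
```

With this page's numbers, every statute backs out to the same 40.0% baseline, which suggests the deltas are all measured against a single TC-wide reference value.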

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant’s Response

In Applicant’s Response dated 1/6/26, the Applicant amended Claim 1 and argued the claims previously rejected in the Office Action dated 10/10/25. Claims 1-21 are pending examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/6/26 has been entered.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11, 15-21 are rejected under 35 U.S.C. 103 as being unpatentable over Okawa et al., United States Patent No. 7129472 (hereinafter “Okawa”), in view of Tchouprakov et al., United States Patent Publication 2014/0248576 (hereinafter “Tchouprakov”) and, Elbaz et al., United States Patent Publication No 2018/0028063 (hereinafter “Elbaz”) and in further view of Esbech et al., United States Patent Publication 20160022389 (hereinafter “Esbech”) and Fisker et al., United States Patent Publication 20140022356 (hereinafter “Fisker”).

Claim 1:

Okawa discloses: A scanning system for scanning an object, comprising: a scanner device comprising (see column 4 lines 47-64). Okawa teaches a scanner device:

an image sensor for acquiring images (see column 4 lines 47-64). Okawa teaches an imaging device that performs imaging processing in which an image is obtained from the signal coming from the light source unit, a monitor that displays the image signal from the imaging device;

a mounting-interface for detachably mounting at least one of a plurality of types of scanning-tips, wherein each of the plurality of types scanning-tips is configured for providing light to the object in an illumination-mode that differs for each of the plurality of types of scanning-tips (see column 2 lines 1-5). Moon teaches a detachable mounting on the scanner device that is able to attach different scanning tips for providing light to objects;

and a recognition component for recognizing the type of scanning-tip mounted to the mounting-interface (see column 2 lines 6-8). Okawa teaches recognition means for recognizing the type of scanning probes;

a processor configured for processing the images acquired by the image sensor into processed data (see column 30 lines 33-45).
Okawa teaches a processor used to process the image acquired;

and a controller configured for controlling the operation of the processor according to the type of the scanning-tip recognized by the recognition component, wherein the controller is further configured for controlling the processor such that when a first type of scanning-tip is mounted and recognized, the processor is controlled to operate in a first processing-mode corresponding to the first type of scanning-tip, and such that when a second type of scanning-tip is mounted and recognized, the processor is controlled to operate in a second processing-mode corresponding to the second type of scanning-tip, wherein the second processing-mode is different from the first processing-mode (see column 2 lines 9-12 and column 5 lines 42-46). Okawa teaches a controller that controls the operation based on the type of scanning tip recognized by the recognition unit,

and wherein: when in the first processing mode, the processor processes a first plurality of images acquired with a first illumination-mode to provide the processed data in the form of first data for 3D geometry and first data for texture of the object (see column 4 lines 47-64 and column 12 lines 31-42). Okawa teaches processing the images based on the processing mode based on the light mode.

a first subset of the first plurality of images being selected according to the first type of scanning tip, thereby defining part of the first processing mode (see column 12 lines 31-42). Okawa teaches the first subset of images is based on the first scanning tip recognized by the identification module, and/or

Okawa fails to expressly disclose processing images based on type of scanning tip.

Tchouprakov discloses: a subset of pixels within said first plurality of images being selected according to the first type of scanning tip, thereby defining part of the first processing mode (see paragraphs [0027] and [0028]). Tchouprakov teaches the pixels are based on the scanning tip size and varying angles. It is implemented into the scanning based on the probe,

and when in the second processing mode, the processor processes a second plurality of images acquired with a second illumination-mode to provide the processed data in the form of second data for the texture of the object (see paragraphs [0010], [0024] and [0028]). Tchouprakov teaches multiple illumination modes to provide the processed data in the form of second data for 3D images and determining texture.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa to include multiple illumination modes and multiple scanning tips for the purpose of extending functionality of intraoral scanning with interchangeable tips, as taught by Tchouprakov.

Okawa and Tchouprakov fail to expressly disclose an infrared probe that determines the internal texture.

Elbaz discloses: the processor processes a second plurality of images acquired with a second illumination-mode using infrared light to provide the processed data in the form of second data for the texture of the object (see paragraphs [0020], [0034], [0074]). Elbaz teaches an infrared illumination-mode light used from the probe to determine the internal features of the object such as the texture.

wherein the second data for the texture of the object is related to texture of internal structure (see paragraphs [0020], [0034], [0074]).
Elbaz teaches the data is related to the internal features of the object such as the texture.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa and Tchouprakov to include using different probes such as infrared illumination probe tips for the purpose of determining the internal feature such as the texture, as taught by Elbaz.

Okawa, Tscouprakov and Elbaz fail to disclose the scanning to determine the texture using red, blue and green pixels.

Esbech discloses: a subset of pixels including a selected set of green, red and blue pixels within said first plurality of images being selected (see paragraphs [0121] and [0166]). Esbech teaches a subset of pixels in red, green and blue that define the texture within the images during oral scanning.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov and Elbaz to include using red, blue and green pixels to determine the texture for the purpose of determining the texture of the oral tissue, as taught by Esbech.

Okawa, Tchouprakov, Elbaz and Esbech fails to expressly disclose obtaining the texture from a single image.

Fisker discloses: causing the first processing mode to obtain the first data for texture of the object from a single image of the first plurality of images (see paragraphs [0054], [0381], [0384] and [0406]). Fisker teaches obtaining the texture of the object from the colors on a single image of the images.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov, Elbaz and Esbech to include determine the texture using a single image for the purpose of making a determination using one large field depth image, as taught by Fisker.

Claim 2:

Okawa discloses: wherein the processor is integrated in the scanner device (see column 4 lines 42-66). Okawa teaches the image processor is integrated in the scanner.

Claim 3:

Okawa discloses: wherein the controller is external to the scanner device (see column 5 lines 3-9). Okawa teaches the controller is externally connected to the scanner.

Claim 4:

Okawa discloses: wherein the type of scanning-tip, as recognized by the recognition component, is in the form of recognition-data, and wherein the scanner device is configured to transmit the recognition-data to the controller (see column 5 lines 60-65 and column 6 lines 17-23). Okawa teaches the identification circuit (recognition) identifies the connected optical probe as being the optical probe 112A or 112B based on whether a resistor R is connected to the electrical connector 118a as shown in FIGS. 2 and 4, an identification signal is applied to a selection Switch SW via a signal line 131a, and contact a or b is selected. The imaging device 115 is electrically connected to the control device 114 via the signal line 115b, and clock signals, for instance, can be transmitted to the control device 114. Clock signals that serve as a reference for the drive waveform at which the scanner is driven are inputted via the signal line 115c to the connector 136 of the imaging device 115.
Claim 5:

Okawa discloses: wherein the recognition element comprises a memory-reader configured to read recognition-data from an integrated memory on each of the plurality of types scanning-tips (see column 2 lines 9-12). Okawa teaches a memory reader configured to determining the type of scanning tip attached based on the information in the memory of the tip.

Claim 6:

Okawa discloses: wherein the illumination-mode for one type of scanning-tip is defined by the wavelength of the light and/or wherein the illumination-mode for one type of scanning tip is defined by the intensity of the light (see column 11 lines 53-64). Okawa teaches the intensity of the light is based on the scanner tip type.

Claim 7:

Okawa fails to expressly disclose processing images based on illumination types.

Tchouprakov discloses: wherein the illumination-mode for one type of scanning-tip is defined by different wavelengths of the light, whereby one type of scanning-tip switches between the different wavelengths of the light (see paragraphs [0026] and [0027]). Tchouprakov teaches the different wavelengths of light are set by the illumination modes.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa to include multiple illumination modes and multiple scanning tips for the purpose of extending functionality of intraoral scanning with interchangeable tips, as taught by Tchouprakov.

Claim 8:

Okawa discloses: wherein the illumination-mode for one type of scanning-tip is defined by the field-of-view of the light and/or wherein the illumination-mode for one type of scanning-tip is defined by a pattern of the light (see column 18 lines 46-60). Okawa teaches the illumination mode is defined by field of view.

Claim 9-11, 15-17:

Okawa, Tchouprakov and Pesach fail to expressly disclose the images being different images but the pixels are the same.

Elbaz discloses: wherein the first data for the 3D geometry is based on the first subset of the first plurality of images, and the first subset of pixels within said first plurality of images, and wherein the first data for the texture of the object is based on the second subset of the first plurality of images and the second subset of pixels within said first plurality of images, wherein the first subset of the first plurality of images is identical/different to the second subset of the first plurality of images, and wherein the first subset of pixels within said first plurality of images is different/identical from the second subset of pixels within said first plurality of images (see paragraph [0161], [0171], [0242]). Elbaz teaches running diagnostics on different ways of scanning and comparing the images and pixels. The analysis of the images and pixels helps determine areas of interest during scanning and determining the clearest ways to achieve the images.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov and Pesach to include image and pixel diagnostic based on images from intraoral scanners for the purpose of effectively receiving the best oral images by running diagnostics on images and pixels, as taught by Elbaz.
Claim 18:

Okawa discloses: wherein the scanner device further comprises a lens configured to translate back and forth while the first and/or second plurality of images is acquired (see column 10 lines 51 - column 11 line 3). Okawa teaches a lens translates the data back and forth while the system is acquiring the images through the optical path.

Claim 19:

Okawa fails to expressly disclose generating the 3d model based on the first or second data.

Tchouprakov discloses: wherein the scanning system further comprises a processor configured to generate a 3D-model of the object, and wherein the 3D-model is generated based on the first data for the 3D geometry, but wherein the 3D-model is not generated based on second data for the 3D geometry, or wherein the 3D-model is generated based on the second data for the 3D geometry, but wherein the 3D-model is not generated based on the first data for the 3D geometry (see paragraphs [0010]). Tchouprakov teaches only generating the 3D model based on the data from one illumination mode.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa to include only generating the 3D model based on one type of data for the purpose of efficiently generating 3D model by reducing the amount of data being processed and only processing data needed for 3D model and not for the live preview, as taught by Tchouprakov.

Claim 20:

Okawa fails to expressly disclose generating the 3d model based on the first or second data.

Tchouprakov discloses: wherein when the 3D-model is not generated based on the second data for the 3D geometry, then the second data for the 3D geometry is compared to the first data for the 3D geometry, whereby the second data for texture of the object is matched to the 3D-model (see paragraphs [0010]). Tchouprakov teaches only generating the 3D model based on the data from one illumination mode and using the second data for the texture of the object to match the 3D model.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa to include only generating the 3D model based on one type of data for the purpose of efficiently generating 3D model by reducing the amount of data being processed and only processing data needed for 3D model and not for the live preview, as taught by Tchouprakov.

Claim 21:

Okawa fails to expressly disclose generating the 3d model based on the first or second data.

Tchouprakov discloses: wherein when the 3D-model is not generated based on the first data for the 3D geometry, then the first data for the 3D geometry is compared to the second data for the 3D geometry, whereby the first data for texture of the object is matched to the 3D-model (see paragraphs [0010]). Tchouprakov teaches only generating the 3D model based on the second data from one illumination mode and using the first data for the texture of the object to match the 3D model.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa to include only generating the 3D model based on one type of data for the purpose of efficiently generating 3D model by reducing the amount of data being processed and only processing data needed for 3D model and not for the live preview, as taught by Tchouprakov.
Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Okawa, in view of Tchouprakov, Elbaz, Esbech, and Fisker in view of Moalem, United States Patent Publication 20200330195.

Claim 12:

Okawa, Tchouprakov, Elbaz, Esbech and Fisker fail to expressly disclose using non-chromatic light with a plurality of wavelengths.

Moalem discloses: wherein the first subset of the first plurality of images is every second image of the plurality of images as recorded with non-chromatic light at a plurality of wavelengths, and wherein the second subset of the first plurality of images is the remaining images of the plurality of images recorded with monochromatic light at a first wavelength (see paragraph [0064]). Moalem teaches setting a starting point or at certain points to record images in chromatic light and the other images are taken in monochromatic light.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov, Elbaz, Esbech and Fisker to include having particular images captured in non-chromatic and others in monochromatic light for the purpose of allowing the user to control how pictures are captured from varying lengths, as taught by Maolem.

Claim 13:

Okawa, Tchouprakov, Elbaz, Esbech and Fisker fail to expressly disclose using non-chromatic light with a plurality of wavelengths.

Moalem discloses: wherein the first subset of the first plurality of images is every third image of the first plurality of images as recorded with non-chromatic light defined by a plurality of wavelengths, and wherein the second subset of the first plurality of images is the remaining images of the first plurality of images recorded with monochromatic light at a first wavelength and at a second wavelength (see paragraph [0064]). Moalem teaches setting a starting point or at certain points to record images in chromatic light and the other images are taken in monochromatic light.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov, Elbaz, Esbech and Fisker to include having particular images captured in non-chromatic and others in monochromatic light for the purpose of allowing the user to control how pictures are captured from varying lengths, as taught by Maolem.

Claim 14:

Okawa, Tchouprakov, Elbaz, Esbech and Fisker fail to expressly disclose using non-chromatic light with a plurality of wavelengths.

Moalem discloses: wherein the second subset of the first plurality of images is a single image as recorded with non-chromatic light defined by a plurality of wavelengths (see paragraph [0064]). Moalem teaches setting a starting point or at certain points to record images in chromatic light and the other images are taken in monochromatic light. Moalem teaches setting a single point to record an image.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Okawa, Tchouprakov, Elbaz, Esbech and Fisker to include having particular images captured in non-chromatic and others in monochromatic light for the purpose of allowing the user to control how pictures are captured from varying lengths, as taught by Maolem.
Response to Arguments

Applicant’s arguments, see REM, filed 1/6/26, with respect to the rejection of claim 1-21 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Okawa, Tchouprakov, Elbaz, Esbech and Fisker.

Applicant argues: None of the art cited by the PTO discloses or otherwise suggests this feature. In fact, none of the art cited by the PTO is concerned with providing any technical effect that is similar to obtaining texture data in a single image, let alone doing so by collecting both red, green, and blue data at the same time.

The Examiner agrees that Okawa, Tchouprakov, Elbaz and Esbech do not teach the argued limitation. The Examiner uses new art, Fisker, to teach the new limitation (see the above rejection for Claim 1). Thus, the combination of art including Fisker, teaches the amended Claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE whose telephone number is (571)270-7259. The examiner can normally be reached M-F 8a-4p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIONNA M BURKE/
Examiner, Art Unit 2178
3/7/26
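The heart of the rejected independent claim is a control architecture: a recognition component identifies which scanning-tip is mounted, and a controller switches the processor between processing modes (and image/pixel subsets) accordingly. The sketch below is a minimal, hypothetical rendering of that control flow for orientation only; the class, function, and tip names are invented for illustration and are not taken from the application, Okawa, or any other cited reference.

```python
# Hypothetical sketch of the control flow recited in Claim 1: a recognition
# component reports the mounted tip, and a controller selects the matching
# processing mode for the image processor. All names and the two example
# modes are illustrative only; they do not come from the application or the
# cited prior art.
from dataclasses import dataclass
from typing import Callable, Dict, List

Image = List[List[int]]  # stand-in for raw sensor frames


@dataclass
class ProcessingMode:
    name: str
    select_images: Callable[[List[Image]], List[Image]]  # subset of images
    process: Callable[[List[Image]], dict]                # -> processed data


# First mode: e.g. a white-light tip -> 3D geometry plus surface texture.
MODE_GEOMETRY_AND_TEXTURE = ProcessingMode(
    name="first processing-mode",
    select_images=lambda imgs: imgs[::2],                 # illustrative subset
    process=lambda imgs: {"3d_geometry": ..., "texture": ...},
)

# Second mode: e.g. an infrared tip -> texture of internal structure only.
MODE_INTERNAL_TEXTURE = ProcessingMode(
    name="second processing-mode",
    select_images=lambda imgs: imgs,
    process=lambda imgs: {"internal_texture": ...},
)

# Controller: maps the recognized tip type to a processing mode.
TIP_TO_MODE: Dict[str, ProcessingMode] = {
    "white_light_tip": MODE_GEOMETRY_AND_TEXTURE,
    "infrared_tip": MODE_INTERNAL_TEXTURE,
}


def handle_scan(recognized_tip: str, frames: List[Image]) -> dict:
    """Select the processing mode from the recognized tip and run it."""
    mode = TIP_TO_MODE[recognized_tip]
    subset = mode.select_images(frames)
    return mode.process(subset)
```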

Prosecution Timeline

Aug 26, 2021: Application Filed
Aug 26, 2021: Response after Non-Final Action
Jun 15, 2024: Non-Final Rejection (§103)
Oct 04, 2024: Response Filed
Dec 05, 2024: Final Rejection (§103)
Apr 14, 2025: Request for Continued Examination
Apr 16, 2025: Response after Non-Final Action
Apr 19, 2025: Non-Final Rejection (§103)
Jul 15, 2025: Response Filed
Oct 02, 2025: Final Rejection (§103)
Jan 06, 2026: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596470: GESTURE-BASED MENULESS COMMAND INTERFACE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591731: SYSTEM AND METHOD FOR SELECTING RELEVANT CONTENT IN AN ENHANCED VIEW MODE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572698: INFRASTRUCTURE METHODS AND SYSTEMS FOR EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT PLATFORM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564152: SYSTEM AND METHOD FOR MANAGEMENT OF SENSOR DATA BASED ON HIGH-VALUE DATA MODEL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547823: DYNAMICALLY AND SELECTIVELY UPDATED SPREADSHEETS BASED ON KNOWLEDGE MONITORING AND NATURAL LANGUAGE PROCESSING (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 54%
With Interview: 73% (+19.3%)
Median Time to Grant: 4y 9m
PTA Risk: High
Based on 431 resolved cases by this examiner. Grant probability derived from career allow rate.
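For a rough calendar read on these projections, the sketch below adds the 4y 9m median to the filing date. This assumes the median time-to-grant is measured from filing, which the page does not state, so treat the result as illustrative only.

```python
# Rough calendar implication of the projections above.
# Assumption: the 4y 9m median time-to-grant is measured from the filing date;
# the page does not state the reference point, so this is illustrative only.
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day for a rough estimate."""
    total = d.year * 12 + (d.month - 1) + months
    year, month = divmod(total, 12)
    day = min(d.day, 28)  # avoid invalid month-end dates
    return date(year, month + 1, day)


filed = date(2021, 8, 26)                        # Application Filed
projected_grant = add_months(filed, 4 * 12 + 9)  # median 4y 9m to grant
print(projected_grant)                           # 2026-05-26: roughly mid-2026
```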
