Prosecution Insights
Last updated: April 19, 2026
Application No. 18/323,161

METHOD, APPARATUS AND RECORDING MEDIUM STORING COMMANDS FOR PROCESSING SCANNED IMAGES OF 3D SCANNER

Non-Final OA §103
Filed: May 24, 2023
Examiner: SINHA, SNIGDHA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Medit Corp.
OA Round: 3 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 50% (3 granted / 6 resolved; -12.0% vs TC avg)
Interview Lift: +45.8% (strong; based on resolved cases with interview)
Avg Prosecution: 2y 6m (typical timeline)
Total Applications: 32 across all art units (26 currently pending)

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 6 resolved cases
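The "vs TC avg" figures are simple percentage-point differences between the examiner's per-statute allow rate and the Tech Center average. A minimal sketch of that arithmetic (the 40.0% TC average is back-computed from the deltas shown, e.g. 65.6 - 25.6 = 40.0; the page itself only displays it as a reference line):

```python
# Examiner's per-statute allow rates, as shown on the cards above.
examiner_rate = {"101": 2.0, "102": 16.2, "103": 65.6, "112": 11.7}

# Tech Center average estimate, back-computed from the displayed deltas
# (hypothetical in form -- the page shows it only as a reference line).
TC_AVG = 40.0

def delta_vs_tc(statute: str) -> float:
    """Signed percentage-point difference vs. the TC average."""
    return round(examiner_rate[statute] - TC_AVG, 1)

for s in ("101", "102", "103", "112"):
    print(f"§{s}: {delta_vs_tc(s):+.1f}% vs TC avg")
```

Running this reproduces all four deltas on the card (+25.6 for §103, -38.0 for §101, and so on), confirming a single common TC baseline behind the chart.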

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Safier (US 20210321872) in view of Herber (US 20070229850), further in view of Wang (US 20190374155), and further in view of Liu (US 20100014781).
Regarding claim 11, Safier teaches an electronic apparatus comprising:

A communication circuit communicatively connected to a 3D scanner (Paragraph 209, computing device may be coupled to one or more intraoral scanner… via a wired or wireless connection);
A memory (Paragraph 209, memory, secondary storage);
A display (Paragraph 71, outputting to a display);
One or more processors (Paragraph 209, computing device may each include one or more processing devices), where the one or more processors are configured to:
Acquire, from the 3D scanner, a 2D image set of an object generated by scan of the 3D scanner (Paragraph 129, receiving one or more two-dimensional (2D) images generated by the intraoral scanner), the 2D image set including:
Detect a first region in the input image based on an output of the artificial neural network (Paragraph 37, processing data from the first plurality of intraoral scans using a trained machine learning model that has been trained to identify restorative dental objects);

While Safier fails to disclose the following, Herber teaches:

At least one patterned image acquired by irradiating the object with patterned light and including depth information and shape information used to generate 3D scan data of the object (Paragraph 35, the geometry camera 26 in conjunction with the pattern flash unit 30 captures a geometric image of the 3D object with the structured light pattern projected onto the 3D object);
At least one non-patterned image acquired by irradiating the object with non-patterned light including color information used to generate the 3D scan data of the object (Paragraph 34, no structured light pattern is projected onto the 3D object during capture of the texture image file);
Generate the 3D scan data based on the at least one patterned image and the at least one non-patterned image from which the data of the region corresponding to the first region is removed (Paragraph 35, the texture data is overlaid onto the 3D data in the geometric image data file to provide a composite image file with texture data and XYZ coordinates from the geometric data file).

Herber and Safier are both considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Safier by using Herber to obtain both a patterned image and a non-patterned image to generate a 3D scan. Doing so would allow for creating a detailed 3D representation by combining the detail of the non-patterned image with the structure of the patterned image.

While the combination of Safier and Herber fails to disclose the following, Wang teaches:

Input an input image to an artificial neural network, which has been trained to detect at least one predetermined region in an image of the object (Paragraph 11, the object of interest can be determined by outlining a boundary of the object of interest by a user or an artificial intelligence process), the input image being generated based on the at least one non-patterned image included in the 2D image set (Paragraph 9, where the regular images are captured using the imaging apparatus by projecting non-structured light onto the body lumen).

Wang and the combination of Safier and Herber are both considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Safier and Herber by using Wang to input an image based on one non-patterned image into an artificial neural network that can detect at least one predetermined region. Doing so would allow for easily detecting regions in images taken with non-structured lighting.
While the combination of Safier, Herber, and Wang fails to disclose the following, Liu teaches:

Generate 3D scan data of the object based on the 2D image set such that the first region is not represented in the 3D scan data (Paragraph 16, converting an input two-dimensional image into three-dimensional image content… performing a subtraction operation on the input two-dimensional image and the rectified background image; and performing a foreground and background classification according to a result of the subtraction operation and the segmented image in order to classify the patches of the segmented image into a foreground patch group and a background patch group);
Wherein, in generating the 3D scan data, the one or more processors are configured to:
Remove data of a region corresponding to the first region from each of the at least one patterned image and the at least one non-patterned image (Paragraph 16, as quoted above).

Note: Herber teaches identifying the same region in both the patterned image and non-patterned image, and Liu teaches removing data of a specified region.

Liu and the combination of Safier, Herber, and Wang are both considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Safier, Herber, and Wang by using Liu to generate 3D scan data based on the 2D image set while excluding a predetermined region. Doing so would allow for accurately and efficiently rendering a 2D image set in 3D while only maintaining the desired portions from the images.

Method claim 1 corresponds to apparatus claim 11. Therefore, claim 1 is rejected for the same reasons as used above.

Regarding claim 15, the combination of Safier, Herber, Wang, and Liu teaches the electronic apparatus of claim 11, wherein the artificial neural network has been trained to output a segmentation result for the input image by classifying at least one pixel included in the input image into a corresponding region among the at least one predetermined region (Safier, Paragraph 381, an artificial neural network being trained to perform dental site classification, there may be a first class (excess material), a second class (teeth), a third class (gums), a fourth class (restorative objects)… the class, prediction, etc. may be determined for each pixel in the image/scan/surface).

Method claim 5 corresponds to apparatus claim 15. Therefore, claim 5 is rejected for the same reasons as used above.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Safier in view of Herber, further in view of Wang, and further in view of Liu as applied to claims 1, 5, 11, and 15, and further in view of Cutforth (US 20230326011).

Regarding claim 12, the combination of Safier, Herber, Wang, and Liu teaches the electronic apparatus of claim 11. While the combination fails to disclose the following, Cutforth teaches:

Wherein the first region is a region corresponding to the metal in the input image (Paragraph 82, the image processing circuitry uses an additional trained model to segment metal objects in the contrast volume).

Cutforth and the combination of Safier, Herber, Wang, and Liu are both considered to be analogous to the claimed invention because they are in the same field of image processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Safier, Herber, Wang, and Liu to incorporate the teachings of Cutforth and detect regions in an image corresponding to metal. Doing so would allow for generating 3D dental models with or without foreign dental objects present in the scan, allowing customization of the 3D model.

Method claim 2 corresponds to apparatus claim 12. Therefore, claim 2 is rejected for the same reasons as used above.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Safier in view of Herber, further in view of Wang, and further in view of Liu as applied to claims 1, 5, 11, and 15, and further in view of Price (US 11676250).

Regarding claim 14, the combination of Safier, Herber, Wang, and Liu teaches the electronic apparatus of claim 11, wherein the one or more processors are configured to: Input the image to the artificial neural network (see claim 11 above). However, while the combination fails to disclose the following, Price teaches:

Generate an RGB image using two or more 2D images, which are included in the 2D image set and used to acquire monochrome information (Column 4, Paragraph 6, generate a high resolution full color output image… model then generates respective red-only, green-only, and blue-only color images as well as a high resolution texture map (i.e., a monochrome image)… pixel data from multiple images is extracted and aggregated together).

Price and the combination of Safier, Herber, Wang, and Liu are both considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Safier, Herber, Wang, and Liu to incorporate the teachings of Price and generate an RGB image by combining multiple input images and acquiring their monochrome information before inputting into the artificial neural network. Doing so would give the neural network a simpler input and allow it to process faster and more efficiently.

Method claim 4 corresponds to apparatus claim 14. Therefore, claim 4 is rejected for the same reasons as used above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Safier in view of Herber, further in view of Wang, and further in view of Liu as applied to claims 1, 5, 11, and 15, and further in view of Kuwabara (US 20240394905).

Regarding claim 18, the combination of Safier, Herber, Wang, and Liu teaches the electronic apparatus of claim 11. Regarding the limitation of removing the data of the region corresponding to the first region from each of the at least one patterned image and the at least one non-patterned image, while the combination fails to disclose the following, Kuwabara teaches:

Change values of pixels included in the region corresponding to the first region in each of the at least one patterned image and the at least one non-patterned image to a preset value (Paragraph 209, replaces a pixel value of the pixel of the background portion with a predetermined pixel value… obtained by cancelling the background).

Note: Herber teaches identifying the same region in both the patterned image and non-patterned image, and Liu teaches removing data of a specified region.

Kuwabara and the combination of Safier, Herber, Wang, and Liu are both considered to be analogous to the claimed invention because they are in the same field of image processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Safier, Herber, Wang, and Liu to incorporate the teachings of Kuwabara and remove data of a specified region by replacing pixel values. Doing so would create a more accurate recognition result by performing recognition processing on the recognition image obtained by cancelling the background in this manner (Kuwabara, Paragraph 210).

Method claim 8 corresponds to apparatus claim 18. Therefore, claim 8 is rejected for the same reasons as used above.

Response to Arguments

Applicant’s arguments with respect to claims 1 and 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Herber teaches capturing both patterned and non-patterned images and identifying corresponding points in both image types. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined Herber with Safier, Wang, and Liu to remove corresponding regions in both patterned and non-patterned images. Therefore, claims 1 and 11 are rejected under this combination.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SNIGDHA SINHA, whose telephone number is (571) 272-6618. The examiner can normally be reached Mon-Fri, 12pm-8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SNIGDHA SINHA/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619

Prosecution Timeline

May 24, 2023
Application Filed
May 05, 2025
Non-Final Rejection — §103
Aug 06, 2025
Response Filed
Oct 06, 2025
Final Rejection — §103
Jan 08, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567216
AUGMENTED-REALITY-INTERFACE CONFLATION IDENTIFICATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12406339
MACHINE LEARNING DATA AUGMENTATION USING DIFFUSION-BASED GENERATIVE MODELS
2y 5m to grant • Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50% (96% with interview, +45.8%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
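The headline projections appear to combine as straight percentage-point arithmetic: the career allow rate (3 granted / 6 resolved = 50%) serves as the base grant probability, and the interview lift is added on top. A sketch of that reading (an assumption about how this page's figures relate, not the tool's documented model):

```python
# Base grant probability = career allow rate (3 granted / 6 resolved).
granted, resolved = 3, 6
base_prob = 100.0 * granted / resolved   # 50.0

# Interview lift in percentage points, added on top of the base and
# capped at 100%.
INTERVIEW_LIFT = 45.8

with_interview = min(base_prob + INTERVIEW_LIFT, 100.0)
print(f"{base_prob:.0f}% base, {with_interview:.0f}% with interview")
```

With these inputs the result is 95.8%, which rounds to the 96% "With Interview" figure shown above; the cap only matters for examiners whose base rate plus lift would exceed 100%.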
