Prosecution Insights
Last updated: April 19, 2026
Application No. 17/472,589

METHOD OF PROCESSING THREE-DIMENSIONAL SCAN DATA FOR MANUFACTURE OF DENTAL PROSTHESIS

Non-Final OA (§103)

Filed: Sep 11, 2021
Examiner: RICHER, AARON M
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Medit Corp.
OA Round: 5 (Non-Final)

Grant Probability: 51% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
With Interview: 70%

Examiner Intelligence

Grants 51% of resolved cases (236 granted / 465 resolved; -11.2% vs TC avg).
Interview lift: +19.5% among resolved cases with an interview (a strong lift).
Typical timeline: 4y 0m average prosecution; 28 applications currently pending.
Career history: 493 total applications across all art units.
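The headline probabilities above are simple arithmetic on the examiner's career counts. A minimal Python sketch (assuming the displayed figures are plain percentages and that the interview lift is an additive percentage-point adjustment, which the source does not state explicitly):

```python
# Sketch: derive the dashboard's headline figures from the raw counts above.
# Assumption (not confirmed by the source): the interview lift is a simple
# additive percentage-point adjustment to the career allow rate.
granted, resolved = 236, 465
interview_lift = 0.195  # +19.5 percentage points

career_allow_rate = granted / resolved               # 0.5075... -> shown as 51%
with_interview = career_allow_rate + interview_lift  # 0.7025... -> shown as 70%

print(f"Career allow rate: {career_allow_rate:.0%}")  # 51%
print(f"With interview:    {with_interview:.0%}")     # 70%
```

Both displayed values round cleanly from the underlying counts, which suggests the 51% and 70% cards are direct presentations of these two quantities rather than outputs of a separate model.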

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 465 resolved cases.
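The "vs TC avg" deltas in the table read as percentage-point differences, so the implied Tech Center baseline can be backed out by subtraction. A quick sketch (assuming simple subtraction, which the source does not state explicitly):

```python
# Back out the Tech Center baseline implied by the table above, assuming
# each "vs TC avg" figure is a percentage-point difference.
examiner = {"§101": 9.4, "§103": 54.7, "§102": 13.1, "§112": 19.9}
delta    = {"§101": -30.6, "§103": 14.7, "§102": -26.9, "§112": -20.1}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute backs out to the same 40.0 baseline
```

Notably, all four statutes imply the same 40.0% figure, consistent with a single Tech Center average estimate underlying the chart rather than per-statute baselines.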

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 30 January 2026 have been fully considered but they are not persuasive. Applicant's arguments with respect to the prior art have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 4, 9-12, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (U.S. Publication 2018/0206950) in view of Barak (U.S. Publication 2019/0209274) and Je (U.S. Publication 2020/0383759).

As to claim 1, Kim discloses a method for processing 3D scanned data to manufacture a dental prosthesis, comprising: acquiring image data from a target object, on which a structure for prosthesis treatment is installed, through a 3D scanner (p. 3, sections 0035-0043; p. 6, sections 0086-0091; p. 6, section 0094-p. 7, section 0097; an object such as the patient's mouth or a model of the patient's mouth is scanned using an oral scanner as well as a CT scan; the CT scan would be a 3D scanner since it produces volumes of voxels and the oral scan is also disclosed as creating a 3D image, meaning the oral scanner would be a 3D scanner; a structure for prosthesis such as an installed abutment is also scanned); acquiring shape information of the structure from the image data (p. 6, section 0094-p. 7, section 0104; p. 8, section 0118; shape data of the outer portion of the abutment structure is determined so as to match with a reference abutment structure in a digital library); comparing the shape information of the scanned structure, acquired in the acquiring of the image data, to reference data for the structure (p. 6, section 0094-p. 7, section 0104; p. 7, section 0110-p. 8, section 0118; p. 8, sections 0124-0126; shape data of the outer portion of the abutment structure is determined so as to match with a reference abutment structure in a digital library; matching is done by comparing measured shapes, lengths, and conditions); and aligning, by aligning the locations of the reference data and the scanned structure, on the basis of the comparison result (p. 8, section 0125; p. 11, sections 0173-0183; matching positions and aligning the virtual/reference abutment data with the actual scanned abutment structure is performed), wherein the structure is a scanbody or abutment between a fixture and an artificial tooth and coupled to the fixture (p. 1, section 0005; p. 6, sections 0086-0091; p. 6, section 0094-p. 7, section 0097; the structure is an abutment both coupled to and installed between a crown/artificial tooth and a fixture).

Kim does not disclose, but Barak does disclose, wherein the reference data and the scanned structure are coupled and formed as corrected data, wherein the corrected data is displayed using predetermined patterns or colors, wherein the reference data for the structure is digital data of the scanned structure that forms the corrected data (p. 6, section 0062-p. 7, section 0065; a scanned structure is compared to a previous reference, reading on a coupling, and specified highlight colors are used to show changes in the new/corrected data; the previous reference is an original entity W and the newly scanned structure is a modified version of the entity after an operation has taken place). The motivation for this is to alert a user to deviations.
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim to have the reference data and the scanned structure coupled and formed as corrected data, wherein the corrected data is displayed using predetermined patterns or colors, wherein the reference data for the structure is digital data of the scanned structure that forms the corrected data, in order to alert a user to deviations as taught by Barak.

Kim discloses, before aligning, when a tooth location to which the reference data for the scanned structure is to be applied is decided, obtaining the reference data corresponding to the tooth location (p. 3, section 0040; p. 6, section 0089; p. 7, section 0110-p. 8, section 0118; p. 8, sections 0124-0126; p. 11, sections 0173-0183; a location/position is decided according to a plan; associated reference data for the location of a structure is obtained and output in an image; after the image output, correspondence analysis and alignment take place).

Kim does not disclose, but Je discloses, before aligning, when a tooth location to which the reference data for the scanned structure is to be applied is decided, the reference data corresponding to the tooth location and for correcting the scanned structure are listed on a library interface (fig. 13; p. 2, section 0035; p. 4, sections 0082-0083; p. 6, section 0124; p. 8, sections 0164-0172; a reference code for the prosthesis to correct tooth issues from a scan is shown in a library selection interface; the tooth location reference number "17" is also shown on the interface), wherein in the library interface, by selecting the reference data for correcting the scanned structure, a 3D model of the reference data is arranged in a scan interface where the scanned structure is displayed (fig. 13; a user can select the reference data from the pull-down menu by selecting "apply"), so that locations and axial directions of the reference data and the scanned structure of the image data match (p. 8, section 0172; after design is confirmed, a tooth location is selected and directions are matched between the reference design data and the scanned oral cavity). The motivation for this is to reduce human error and working time (p. 1, sections 0003 and 0008).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim and Barak so that, before aligning, when a tooth location to which the reference data for the scanned structure is to be applied is decided, the reference data corresponding to the tooth location and for correcting the scanned structure are listed on a library interface, wherein in the library interface, by selecting the reference data for correcting the scanned structure, a 3D model of the reference data is arranged in a scan interface where the scanned structure is displayed, in order to reduce human error and working time as taught by Je.

As to claim 3, Kim discloses wherein the target object is a model obtained by replicating an inside of an actual oral cavity of a patient or the internal structure of the oral cavity of the patient (p. 3, sections 0035-0043; p. 6, sections 0086-0091; p. 6, section 0094-p. 7, section 0097; the mouth/oral cavity internal structure of a patient is the target object).

As to claim 4, Kim discloses wherein the acquiring of the image data comprises acquiring a plurality of 2D image data from a scan unit of the 3D scanner, and generating 3D image data on the basis of the plurality of acquired 2D data (p. 3, sections 0036-0039; a CT scan is acquired, which is, by definition, a plurality of 2D slices that generate a 3D volumetric image).

As to claim 9, Kim discloses wherein the reference data and the image data are automatically aligned (p. 6, section 0094-p. 7, section 0104; p. 7, section 0110-p. 8, section 0118; matching is done by a program rather than a user).

As to claim 10, Kim discloses wherein the aligning of the locations comprises aligning the reference data on the basis of the scanned location of the image data (p. 8, sections 0124-0126; alignment is performed based on matching reference points based on the scanned location of abutment positions in the image data).

As to claim 11, Kim discloses wherein the aligning of the locations is performed by superimposing a reference point of the image data on a reference point of the reference data for the structure (fig. 6; p. 8, sections 0124-0126; a matching or superimposition of two reference points on each of the reference abutment and the image abutment is performed).

As to claim 12, Kim discloses use of two reference points (p. 8, section 0126). Kim does not explicitly disclose wherein the number of the reference points is set to one or three. However, applicant's specification makes clear that there is no criticality to the use of exactly one or three reference points; it is a design choice based on user preference (p. 32 of filed specification: "the number of reference points is not limited, but may be selected and used in consideration of the accuracy and speed of the alignment"). Regarding prior art where the only difference is the choice of size or proportion of an element, MPEP 2144.04 states "mere scaling up of a prior art process capable of being scaled up, if such were the case, would not establish patentability in a claim to an old process so scaled". In other words, scaling up a prior art two-reference-point method to three reference points would not be a patentable distinction because the invention would work in exactly the same way, and applicant's specification admits that there is no particular advantage to the amount in the claim limitation.

As to claim 19, see the rejection to claim 1.
Further, Kim discloses an apparatus comprising a 3D scanner (see rejection to claim 1) and a processor to perform the steps (p. 6, section 0091; a PC would inherently include a processor), and Barak discloses a display for the corrected data using predetermined patterns or colors (p. 6, section 0062-p. 7, section 0064; specified highlight colors are used to show changes in the new/corrected data using a signal sent to a display). Motivation for the combination is given in the rejection to claim 1.

As to claim 20, see the rejection to claim 3.

Claims 5-7 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Barak and Je and further in view of Lang (U.S. Publication 2020/0138518).

As to claim 5, Kim does not disclose, but Lang discloses, wherein the acquiring of the image data further comprises updating the image data by scanning new image data in real time (p. 218, section 1629; the image data is constantly being scanned and registration is occurring in real time). The motivation for this is to allow new information in a surgical planning system. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim, Barak, and Je to update the image data by scanning new image data in real time in order to allow new information in a surgical planning system as taught by Lang.

As to claim 6, Lang discloses wherein the updating of the image data comprises superimposing the newly acquired image data on the previously acquired image data (p. 248, section 1809; newly acquired image data is superimposed with previously acquired fluoroscopic image data). Motivation for the combination is given in the rejection to claim 5.

As to claim 7, Lang discloses wherein the acquiring of the shape information of the scanned structure, the comparing of the shape information, and the aligning of the locations are performed at the same time as the acquiring of the image data (p. 247-248, section 1802; p. 248, sections 1808-1809; acquiring of shape information, comparison using markers, and alignment/registration of the fluoroscopic and recently acquired image data is performed at the same time as new image data is acquired). Motivation for the combination is given in the rejection to claim 5.

As to claim 13, Lang discloses wherein the acquiring of the image data, the acquiring of the shape information, the comparing of the shape information, and the aligning of the locations are performed on a scan interface (p. 1, section 0010; p. 115, sections 0976-0977; p. 194, section 1526; an interface for scanning and registering/comparing/aligning is discussed; the interfaces described can all be a single interface). Motivation for the combination is given in the rejection to claim 5.

As to claim 14, Kim discloses loading the reference data for the structure from a previously mounted library (p. 6, section 0094-p. 7, section 0104; p. 8, section 0118; shape data of the outer portion of the abutment structure is determined so as to match with a reference abutment structure in a digital library; the library would inherently be loaded/mounted somewhere before use, reading on a "previously mounted" library).

As to claim 15, Lang discloses wherein in the loading of the reference data, the reference data for the structure is displayed on a library interface different from the scan interface (p. 1, section 0010; p. 2, section 0013; p. 2, section 0021; p. 115, sections 0976-0977; p. 123, section 0120; p. 125, section 1032; p. 194, section 1526; both an interface for scanning and an interface for selection of a structure from a library are discussed; the interfaces described can all be different interfaces). Motivation for the combination is given in the rejection to claim 5.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Barak and Je and further in view of Wang (U.S. Publication 2022/0020164).
As to claim 8, Kim does not disclose, but Wang discloses, after the comparing of the shape information of the image data to the reference data, determining whether the comparison result satisfies an end reference value, and ending the acquiring of the image data and the aligning of the locations according to an end condition (p. 3, section 0038; p. 3, section 0046; p. 4, section 0055; information from the radiograph image data is compared to a reference 3D image and reference X-ray images; the comparison of objects or regions would also include the shapes that make up those objects or regions; the radiograph image data is acquired and registered/aligned and it is determined whether a reference precision value has been reached; stopping occurs when an end condition is satisfied). The motivation to use a reference value and an end condition is to comply with an application requirement (p. 3, section 0048). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim, Barak, and Je to, after the comparing of the shape information of the image data to the reference data, determine whether the comparison result satisfies an end reference value, and end the acquiring of the image data and the aligning of the locations according to an end condition in order to comply with an application requirement as taught by Wang.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Barak, Je, and Lang and further in view of Kim '652 (U.S. Publication 2018/0075652).

As to claim 16, Kim discloses wherein the loading of the reference data comprises selecting and assigning reference data for the structure, needed by a user, and arranging a 3D model of the reference data for the structure, selected in the selecting and assigning of the reference data (p. 8, sections 0123-0126; reference data is selected based on a match, and the reference 3D model for the structure is arranged into the acquired 3D image). Kim does not disclose, but Kim '652 does disclose, that the arrangement is at a random location on the scan interface (fig. 4e; p. 6, sections 0098-0099; 3D model reference data is generated with other data at a random location on a camera/image scan interface). The motivation for this is to allow a user to align the models themselves. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim, Barak, Je, and Lang to have the arrangement at a random location on the scan interface in order to allow a user to align the models themselves as taught by Kim '652.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Barak and Je and further in view of Sachdeva (U.S. Publication 2014/0379356).

As to claim 18, Kim does not disclose, but Sachdeva discloses, wherein a tooth number corresponding to a portion where the corrected data is formed is displayed to confirm that the reference data and the scanned structure are accurately aligned (p. 16, section 0157; p. 16-17, section 0164; p. 19, section 0188; a user selects alignment of a 3D scanned teeth model and a 2D X-ray image, which can read on reference data; if a user is satisfied with alignment accuracy, other tasks are performed, such as measurement including tooth number display; thus, the tooth number display over corrected aligned data would confirm that the user is satisfied with alignment accuracy). The motivation for this is to allow a user to inspect and measure the tooth characteristics and contact points.
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kim, Barak, and Je to display a tooth number corresponding to a portion where the corrected data is formed to confirm that the reference data and the scanned structure are accurately aligned in order to allow a user to inspect and measure the tooth characteristics and contact points as taught by Sachdeva.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON M RICHER whose telephone number is (571)272-7790. The examiner can normally be reached 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON M RICHER/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Sep 11, 2021: Application Filed
Feb 09, 2024: Non-Final Rejection — §103
Jun 16, 2024: Response Filed
Aug 02, 2024: Final Rejection — §103
Dec 01, 2024: Request for Continued Examination
Dec 06, 2024: Response after Non-Final Action
Dec 11, 2024: Non-Final Rejection — §103
May 16, 2025: Response Filed
Jul 30, 2025: Final Rejection — §103
Jan 30, 2026: Request for Continued Examination
Feb 02, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586151: Frame Rate Extrapolation (2y 5m to grant; granted Mar 24, 2026)
Patent 12579600: SEAMLESS VIDEO IN HETEROGENEOUS CORE INFORMATION HANDLING SYSTEM (2y 5m to grant; granted Mar 17, 2026)
Patent 12571669: DETECTING AND GENERATING A RENDERING OF FILL LEVEL AND DISTRIBUTION OF MATERIAL IN RECEIVING VEHICLE(S) (2y 5m to grant; granted Mar 10, 2026)
Patent 12555305: Systems And Methods For Generating And/Or Using 3-Dimensional Information With Camera Arrays (2y 5m to grant; granted Feb 17, 2026)
Patent 12548233: 3D TEXTURING VIA A RENDERING LOSS (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 51%
With Interview: 70% (+19.5%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 465 resolved cases by this examiner. Grant probability derived from career allow rate.
