Prosecution Insights
Last updated: April 19, 2026
Application No. 17/939,338

DATA PROCESSING APPARATUS AND METHOD THEREOF

Final Rejection — §102, §103
Filed
Sep 07, 2022
Examiner
AHN, CHRISTINE YERA
Art Unit
2615
Tech Center
2600 — Communications
Assignee
Medit Corp.
OA Round
4 (Final)
69%
Grant Probability — Favorable
5-6
OA Rounds
2y 7m
To Grant
99%
With Interview

Examiner Intelligence

Grants 69% — above average
69%
Career Allow Rate
11 granted / 16 resolved
+6.8% vs TC avg
Strong +38% interview lift
+37.5%
Interview Lift
(resolved cases with vs. without interview)
Typical timeline
2y 7m
Avg Prosecution
34 currently pending
Career history
50
Total Applications
across all art units

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 16 resolved cases
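The "vs TC avg" deltas above are enough to back out the baseline the dashboard is comparing against. A quick sketch (illustrative only; it assumes the delta is a simple percentage-point difference, which the source does not state explicitly):

```python
# Reconstruct the implied Tech Center baseline from each statute-specific
# rejection rate and its stated "vs TC avg" delta.
# Assumption (not confirmed by the source): TC avg = examiner rate - delta.
examiner_rate = {"101": 5.2, "103": 49.6, "102": 21.9, "112": 20.1}
delta_vs_tc = {"101": -34.8, "103": 9.6, "102": -18.1, "112": -19.9}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
# Every statute implies the same 40.0% baseline, suggesting the dashboard
# compares against a single flat Tech Center estimate rather than
# per-statute averages.
```

Notably, all four statutes yield the same 40.0% implied baseline, which is consistent with the "Tech Center average estimate" caveat in the chart note.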

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Amendment

3. The amendment filed January 22, 2026 has been entered. Claims 1 and 3-14 remain pending in the application.

Response to Arguments

4. Applicant's arguments filed January 22, 2026 have been fully considered, but they are not persuasive.

5. Applicant argues that Azernikov et al. (United States Patent Application Publication No. 2018/0028294 A1), hereinafter Azernikov, does not disclose "identifying the type of the 3D scan data by determining whether the 3D scan data of the object includes post-preparation 3D scan data, pre-preparation 3D scan data, or both post-preparation 3D scan data and pre-preparation 3D scan data" as claimed in amended claim 1. The Examiner replies that the claim language only requires identifying one of those types and does not claim identifying all of those types, since the claim language uses "or". Thus, Azernikov's teaching of identifying only the post-preparation 3D scan data teaches the above limitation. Applicant is advised to use "and" to require all types to be checked for in the 3D scan data.

6. Applicant argues that Azernikov "merely selects a restoration type and an associated neural network within a single prosthesis design pipeline based on an analysis of the patient's dentition derived from post-preparation scan data." Applicant further argues that this does not constitute selecting a module according to the identified type of scan data, where the identified type includes post-preparation, pre-preparation, or both post- and pre-preparation scan data. The Examiner replies that the claim language only requires identifying one of those types and does not claim identifying all of those types, since the claim language uses "or".
Thus, Azernikov's teaching of selecting a module based on identifying only the post-preparation 3D scan data teaches the above limitation. Applicant is advised to use "and" to require all types to be checked for in the 3D scan data.

7. Applicant argues that Azernikov does not disclose selecting a module corresponding to the scenario in which both pre-preparation 3D scan data and post-preparation 3D scan data are available, and/or a scenario where pre-preparation 3D scan data is available. Applicant also argues that it does not disclose designing a prosthesis by selecting a module in a different manner depending on different scan data types. The Examiner replies that the claim language only requires identifying one of those types and does not claim identifying all of those types, since the claim language uses "or". Thus, Azernikov's teaching of selecting a module based on identifying only the post-preparation 3D scan data teaches the above limitation. Applicant is advised to use "and" to require all types to be checked for in the 3D scan data. Furthermore, Azernikov in Paragraphs 77-78 does teach selecting a different neural network based on the category of dentition type identified from the post-preparation 3D scan data input. Thus, the module or neural network is selected in a different manner depending on the identified type of the 3D scan data. The claim as amended does not require the identified types of the 3D scan data to include post-, pre-, and both post- and pre-preparation 3D scan data. The claim language only requires the type of 3D scan data to be identified by determining whether the 3D scan data is post-, pre-, or both post- and pre-preparation 3D scan data. The claim can be broadly interpreted to mean that after determining the 3D scan data is post-preparation data, one can then identify a type of the 3D scan data from that information.

8. Conclusion: The rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below.
New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by the Applicant's amendments to the claims. Therefore, the present Office Action is made final.

Claim Rejections - 35 USC § 102

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

10. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

11. Claim(s) 1, 3, 6-11, and 14 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Azernikov et al. (United States Patent Application Publication No. 2018/0028294 A1), hereinafter referred to as Azernikov.

12. Regarding claim 1, Azernikov teaches a data processing method using a data processing apparatus, the data processing method comprising: obtaining a 3D oral model comprising 3D scan data of an object (Paragraph 56 mentions scanning the patient's dental or oral anatomy, which can be assembled into a digital model; Paragraph 78 mentions uploading the patient's dentition scan data set, which can be in its original 3D format); selecting one module from a plurality of modules according to a type of the 3D scan data of the object (Paragraphs 12-13 mention that the 3D scan data can be analyzed to determine a category of dentition or dental feature; the category can include restoration types such as crowns, inlays, bridges, and implants; Paragraph 78 mentions selecting one of the neural networks depending on the selected "type of dental restoration"; selecting the type of dental restoration and neural networks teaches selecting a module under the broadest reasonable interpretation); and generating a prosthesis for the object by designing the prosthesis using the selected module (Paragraph 78 teaches that the client can choose a dental restoration to be modeled and choose the neural network based on the selected restoration type, which then generates the prosthesis; this designs a prosthesis using the selected module; Paragraph 79 teaches that, using the selected trained neural network, a 3D model of a crown can be generated), wherein the selecting of one module from the plurality of modules comprises: identifying the type of the 3D scan data by determining whether the 3D scan data of the object includes post-preparation 3D scan data, pre-preparation 3D scan data, or both post-preparation 3D scan data and pre-preparation 3D scan data (Paragraph 7 teaches scanning the prepared tooth and not the pre-prepared tooth; thus, the 3D scan data is post-preparation 3D scan data; Paragraphs 77-78 teach selecting one of the neural networks depending on the category of the dentition identified from the post-preparation 3D scan; the category of dentition can be considered a type of 3D scan data; thus, the type of 3D scan data is identified by determining that the 3D scan data includes post-preparation 3D scan data, and the selection of the module based on analyzing the post-preparation 3D scan data teaches selecting a module according to the post-preparation type of the 3D scan data); and selecting a module configured to design the prosthesis in a different manner according to the identified type of the 3D scan data (Paragraphs 77-78 teach selecting one of the neural networks depending on the category of the dentition identified from the post-preparation 3D scan; the category of dentition can be considered a type of 3D scan data; thus, the module or neural network is selected differently according to the identified type of the 3D scan data).

13. Regarding claim 3, Azernikov teaches the limitations of claim 1. Azernikov further teaches the data processing method wherein the selecting of the module from the plurality of modules comprises selecting a first module on the basis that the 3D scan data of the object comprises the post-preparation 3D scan data of the object and does not comprise the pre-preparation 3D scan data of the object, wherein the generating of the prosthesis comprises (Paragraph 7 teaches scanning and uploading the prepared tooth; the prepared tooth scan can be considered post-preparation 3D scan data of the object; Paragraphs 77-78 teach selecting one of the neural networks depending on the category of the dentition identified, which would only be from the post-preparation 3D scan): obtaining a library tooth corresponding to the object based on the selecting of the first module (Paragraph 58 teaches that a tooth library can be searched; Paragraph 73 mentions the design software can include library-based automatic dental restoration to design the dental restoration or prosthesis); and generating a first prosthesis using the post-preparation 3D scan data of the object and the library tooth (Paragraph 73 teaches the design software can include library-based automatic dental restoration along with the dental information from the 3D scan to design the dental restoration or prosthesis).

14. Regarding claim 6, Azernikov teaches the limitations of claim 1. Azernikov further teaches the data processing method further comprising receiving a selection of one module from the plurality of modules from a user (Paragraph 9 teaches that the user can decide which dental restoration type is to be fabricated; Paragraph 78 teaches selecting one of the neural networks depending on the selected restoration type).

15. Regarding claim 7, Azernikov teaches the limitations of claim 1.
Azernikov further teaches the data processing method further comprising identifying, from the 3D scan data included in the 3D oral model, 3D scan data required as mandatory input data by the selected module (Paragraphs 12-13 teach that the 3D scan data can be analyzed to determine a category of dentition or dental feature; the category can include restoration types such as crowns, inlays, bridges, and implants; Paragraph 79 teaches requiring the patient's dentition data set in order to use the neural network or selected module to generate the 3D model of a crown, which is a prosthesis; this discloses that the 3D scan data is mandatory input data).

16. Regarding claim 8, Azernikov teaches the limitations of claim 7. Azernikov further teaches the data processing method wherein the 3D oral model comprises 3D scan data of a dental arch opposite to an arch including the object, and the dental arch comprises an antagonist tooth corresponding to the object (Paragraph 56 teaches taking a scan of the preparation and opposing jaws, which inherently contains the antagonist tooth), wherein the identifying of the 3D scan data comprises identifying the 3D scan data of the dental arch as mandatory input data (Paragraph 79 teaches requiring the patient's dentition data set in order to use the neural network or selected module to generate the 3D model of a crown, which is a prosthesis; this discloses that the 3D scan data is mandatory input data).

17. Regarding claim 9, Azernikov teaches the limitations of claim 8. Azernikov further teaches the data processing method wherein the generating of the prosthesis for the object comprises generating the prosthesis for the object by using both the 3D scan data of the object and the 3D scan data of the dental arch (Paragraph 56 teaches taking a scan of the preparation and opposing jaws, which inherently contains the antagonist tooth; it also teaches using the dental model of those scans to design a dental restoration or prosthesis).

18. Regarding claim 10, Azernikov teaches the limitations of claim 7. Azernikov further teaches the data processing method wherein the identifying of the 3D scan data required as mandatory input data comprises identifying a type of the 3D scan data from identification information on the 3D scan data (Paragraphs 12-13 teach that the 3D scan data can be analyzed to determine a category of dentition or dental feature; Paragraph 79 teaches requiring the patient's dentition data set in order to use the neural network or selected module to generate the 3D model of a crown, which is a prosthesis; this discloses that the 3D scan data is mandatory input data).

19. Regarding claim 11, Azernikov teaches the limitations of claim 10. Azernikov further teaches the data processing method wherein the type of the 3D scan data comprises at least one piece of information indicating whether the 3D scan data is about the maxilla or mandible, or information indicating whether the 3D scan data is about a pre-preparation tooth or a post-preparation tooth (Paragraphs 12-13 teach that the 3D scan data can be analyzed to determine a category of dentition or dental feature; Paragraph 20 teaches that the data set has dental features identified, which can comprise a dental preparation, lower jaw, or upper jaw).

20. Regarding claim 14, claim 14 is the data processing apparatus claim (Azernikov Figure 2 teaches a processor at marker 202; Azernikov Paragraph 82 teaches the processor uses instructions and data from the memory) corresponding to method claim 1 and is accordingly rejected using substantially similar rationale to that which is set forth with respect to claim 1.

Claim Rejections - 35 USC § 103

21. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

22. Claim(s) 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Azernikov et al. (United States Patent Application Publication No. 2018/0028294 A1), hereinafter referred to as Azernikov, as applied to claim 1 above, and further in view of Joshi (United States Patent No. 10,098,715 B2) - IDS.

23. Regarding claim 4, Azernikov teaches the limitations of claim 1. Azernikov further teaches selecting a module based on the 3D scan data of the object (Paragraphs 12-13 teach that the 3D scan data can be analyzed to determine a category of dentition or dental feature; the category can include restoration types such as crowns, inlays, bridges, and implants; Paragraph 78 teaches using the category information identified from the 3D scan data to select one of the neural networks to generate the prosthesis). However, Azernikov fails to teach that the 3D scan data of the object comprises the pre-preparation 3D scan data of the object and does not comprise the post-preparation 3D scan data of the object, wherein the generating of the prosthesis comprises generating a second prosthesis based on the selecting of the second module by using the pre-preparation 3D scan data of the object. However, Joshi teaches that the 3D scan data of the object comprises the pre-preparation 3D scan data of the object and does not comprise the post-preparation 3D scan data of the object, wherein the generating of the prosthesis comprises generating a second prosthesis based on the selecting of the second module by using the pre-preparation 3D scan data of the object (Figure 7, marker 730, teaches generating a prosthesis using the 3D image of the dentition prior to the change in dentition; Column 1, lines 37-51 teach generating the prosthesis using the 3D scan of the teeth prior to a change in the dentition, which can be interpreted to mean they are using the scan data of the pre-preparation teeth). Azernikov and Joshi are considered analogous to the claimed invention because both are in the same field of creating dental prostheses using 3D scan data of the patient's teeth.
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the selection of the module based on the 3D scan data with the pre-preparation 3D scan data specified in Joshi in order to design a prosthesis that is a close match to the initial tooth design prior to a change in dentition (Joshi Column 8, lines 1-11).

24. Regarding claim 5, Azernikov teaches the limitations of claim 1. Azernikov further teaches selecting a module based on the 3D scan data of the object (Paragraphs 12-13 teach that the 3D scan data can be analyzed to determine a category of dentition or dental feature; the category can include restoration types such as crowns, inlays, bridges, and implants; Paragraph 78 teaches using the category information identified from the 3D scan data to select one of the neural networks to generate the prosthesis). However, Azernikov fails to teach that the 3D scan data of the object comprises the post-preparation 3D scan data of the object and the pre-preparation 3D scan data of the object, wherein the generating of the prosthesis comprises generating a third prosthesis based on the selecting of the third module by using the post-preparation 3D scan data of the object and the pre-preparation 3D scan data of the object.
However, Joshi teaches that the 3D scan data of the object comprises post-preparation 3D scan data of the object and pre-preparation 3D scan data of the object, wherein the generating of the prosthesis comprises generating a third prosthesis based on the selecting of the third module by using the post-preparation 3D scan data of the object and the pre-preparation 3D scan data of the object (Figure 6 teaches that the original tooth shape scan at marker 610 and the prepped tooth shape scan at marker 620 are both used at marker 630 in order to create an inlay, which is a dental prosthesis, at markers 640-660; Column 1, lines 52-58 teach generating a prosthesis using the 3D scan of teeth prior to the change in dentition and the post-change 3D image). Azernikov and Joshi are considered analogous to the claimed invention because both are in the same field of creating dental prostheses using 3D scan data of the patient's teeth. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the selection of the module based on the 3D scan data with both the post-preparation and pre-preparation 3D scan data specified in Joshi to better recommend a prosthesis based on a comparison between the pre-preparation and post-preparation 3D scans (Joshi Column 9, lines 9-25).

25. Claim(s) 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Azernikov et al. (United States Patent Application Publication No. 2018/0028294 A1), hereinafter referred to as Azernikov, as applied to claim 1 above, and further in view of Ryu (United States Patent Application Publication No. 2021/0244518 A1).

26. Regarding claim 12, Azernikov teaches the limitations of claim 1. However, Azernikov fails to teach the data processing method wherein, when the object comprises a plurality of objects, the generating of the prosthesis for the object comprises generating prostheses together for the plurality of objects.
Ryu teaches the data processing method wherein, when the object comprises a plurality of objects, the generating of the prosthesis for the object comprises generating prostheses together for the plurality of objects (Paragraphs 178-179 and Figure 8, marker 'C', teach that in the 3D scan model two teeth can be marked as the target of restoration so that a plurality of prostheses is generated; Paragraph 21 and Figures 9A and 9B teach a user creating prostheses for two objects). Azernikov and Ryu are considered analogous to the claimed invention because both are in the same field of creating dental prostheses based on 3D scan data of the patient's teeth. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the data processing method to generate a prosthesis in Azernikov with the method of generating multiple prostheses in Ryu to reduce the time for designing a prosthesis by designing multiple prostheses at a time (Ryu Paragraph 178).

27. Regarding claim 13, Azernikov teaches the limitations of claim 12. However, Azernikov fails to teach the data processing method wherein, when a portion of the object is included in the maxilla and another portion of the object is included in the mandible, the generating of the prosthesis for the object comprises generating together a prosthesis for the portion of the object included in the maxilla and a prosthesis for the other portion of the object included in the mandible.
Ryu teaches the data processing method wherein, when a portion of the object is included in the maxilla and another portion of the object is included in the mandible, the generating of the prosthesis for the object comprises generating together a prosthesis for the portion of the object included in the maxilla and a prosthesis for the other portion of the object included in the mandible (Paragraphs 178-179 and Figure 8, marker 'C', teach that a user can mark two teeth as the target of restoration; Paragraph 21 and Figures 9A and 9B show a user creating prostheses for two objects; Paragraph 82 teaches that a scan can be taken of the upper and lower jaw, which are the maxilla and mandible; Paragraph 96 teaches the design module can display the oral images for both the upper and lower jaw; since the user can see both the upper and lower jaw, they can mark teeth in both the maxilla and mandible to create a prosthesis). Azernikov and Ryu are considered analogous to the claimed invention because both are in the same field of creating dental prostheses based on 3D scan data of the patient's teeth. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the data processing method to generate a prosthesis in Azernikov with the method of generating multiple prostheses in the maxilla and mandible taught by Ryu to reduce the time for designing a prosthesis by designing multiple prostheses at a time (Ryu Paragraph 178).

Conclusion

28. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

29. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE Y AHN, whose telephone number is (571) 272-0672. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTINE YERA AHN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Sep 07, 2022
Application Filed
Feb 06, 2025
Non-Final Rejection — §102, §103
May 12, 2025
Response Filed
Jun 03, 2025
Final Rejection — §102, §103
Sep 05, 2025
Request for Continued Examination
Sep 08, 2025
Response after Non-Final Action
Oct 22, 2025
Non-Final Rejection — §102, §103
Jan 22, 2026
Response Filed
Feb 12, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by the same examiner in similar technology areas

Patent 12602877
BODY MODEL PROCESSING METHODS AND APPARATUSES, ELECTRONIC DEVICES AND STORAGE MEDIA
2y 5m to grant Granted Apr 14, 2026
Patent 12548187
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12456274
FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL
2y 5m to grant Granted Oct 28, 2025
Patent 12450810
ANIMATED FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL
2y 5m to grant Granted Oct 21, 2025
Patent 12439025
APPARATUS, SYSTEM, METHOD, STORAGE MEDIUM, AND FILE FORMAT
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+37.5%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
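The note above says the grant probability is derived from the career allow rate, and the figures check out. A minimal sketch of that arithmetic (the rounding and the percentage-point treatment of the "+6.8% vs TC avg" delta are assumptions, not the tool's documented formula):

```python
# Career allow rate from the dashboard's stated counts:
# 11 granted out of 16 resolved cases.
granted, resolved = 11, 16
allow_rate = granted / resolved              # 0.6875

# The headline "Grant Probability" appears to be this rate
# rounded to a whole percent.
grant_probability = round(allow_rate * 100)  # 69

# Assuming "+6.8% vs TC avg" is a percentage-point delta, the
# implied Tech Center average allow rate is roughly 62%.
implied_tc_allow = round(allow_rate * 100 - 6.8, 1)
print(grant_probability, implied_tc_allow)
```

The 99% "with interview" figure is not reproducible from the stated +37.5% lift alone, so its exact derivation is left to the tool.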
