Prosecution Insights
Last updated: April 19, 2026
Application No. 17/970,689

METHOD AND SYSTEM FOR SUPPLEMENTING SCAN DATA BY USING LIBRARY DATA

Status: Final Rejection (§103)
Filed: Oct 21, 2022
Examiner: POTTS, RYAN PATRICK
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Medit Corp.
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (189 granted / 235 resolved; +18.4% vs TC avg; above average)
Interview Lift: +36.8% among resolved cases with an interview (strong)
Avg Prosecution: 3y 2m typical timeline; 29 applications currently pending
Total Applications: 264 across all art units (career history)
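As a sanity check, the card's headline numbers are mutually consistent. The sketch below assumes the "+18.4% vs TC avg" figure is a difference in percentage points (the dashboard does not state its methodology):

```python
# Arithmetic check of the examiner card's headline numbers.
# Assumption: the "+18.4% vs TC avg" delta is in percentage points.
granted, resolved = 189, 235

allow_rate = 100 * granted / resolved   # career allowance rate, percent
implied_tc_avg = allow_rate - 18.4      # TC average implied by the delta

print(round(allow_rate, 1))             # ≈ 80.4, shown on the card as "80%"
print(round(implied_tc_avg, 1))         # ≈ 62.0
```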

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 235 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see Remarks at page 10, filed 3 November 2025, with respect to the objection to the drawings have been fully considered and are persuasive. The objection is withdrawn.

Applicant’s arguments, see Remarks at page 11, filed 3 November 2025, with respect to the objections to the abstract and claim 3 have been fully considered and are persuasive. The objections have been withdrawn.

Applicant’s arguments, see Remarks at page 12, filed 3 November 2025, with respect to the invocation of 35 U.S.C. 112(f) by certain claim limitations in claims 16 and 18-20 have been fully considered and are persuasive. The invocation has been withdrawn.

Applicant’s arguments, see Remarks at pages 12-13, filed 3 November 2025, with respect to the rejection of claims 6, 7, and 9 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant’s arguments, see Remarks at pages 13-14, filed 3 November 2025, with respect to the rejection of claim 3 under 35 U.S.C. 112(d) have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant’s arguments, see Remarks at pages 14-17, filed 3 November 2025, with respect to the rejections of claims under 35 U.S.C. 102 and under 35 U.S.C. 103 have been fully considered but are not persuasive. On pages 15-16 of Remarks, Applicant provides three separate arguments with respect to Kim.
In the first argument, Applicant argues Kim does not teach “wherein the subject is at least one of a mouth of a patient and an oral model of the mouth of the patient” because Kim discloses a “subject” including a gypsum model (i.e., a plaster dental impression) that allegedly cannot be the “subject” recited in the claims because “the present invention expressly defines the subject as ‘a mouth of a patient and an oral model of the mouth of the patient’ (see Specification, [002], [0058]).”

Examiner respectfully disagrees. Paragraph 2 does not mention a subject and paragraph 58 discloses in part, “the subject is not essentially limited to the inside of the actual mouth of a patient, and the subject may be an oral model (plaster cast) for testing the insertion depth and insertion angle of a structure before the structure is inserted into the mouth” (emphasis added). Paragraphs 2 and 58 do not include an express definition of the “subject” as recited in the claims. Paragraph 59 also describes different types of scanners depending upon whether the subject is an oral model or is inside an actual mouth, but not both.

Furthermore, the claim language “a mouth of a patient and an oral model of the mouth of the patient” appears in claim 1 as part of “the subject is at least one of a mouth of a patient and an oral model of the mouth of the patient” (emphasis added). As explained in more detail below, per the decision in SuperGuide v. DirecTV, this limitation is interpreted to be a disjunctive list, meaning Kim’s disclosure of an oral gypsum model teaches “the subject” of claim 1. The oral gypsum model is an oral model of a patient’s mouth, as it provides a three-dimensional representation of the patient’s mouth so that a dental prosthesis can be designed to fit in the patient’s mouth. Kim discloses an oral model into which a structure is inserted. Figure 4 of Kim shows an oral model of a patient’s mouth with a structure (scan pin) inserted.
Figures 7-10b show the digital model representing the gypsum model with scan pin inserted, registration/alignment with the three-dimensional model of the scan pin, and a tooth model applied over the aligned scan pin. Thus, Kim also teaches “a subject into which a structure is inserted” as recited in claims 1, 5, and 16.

In the second argument, Applicant argues that Kim does not disclose “a data blank” in the scan data or “a part having low density by the structure having a material different from that of the subject” because “Kim's scan pin and plaster model are both made of non-reflective material and therefore do not create any data blank due to differing reflectance or material properties.” These limitations are recited as alternatives. The prior art only needs to teach one of the alternatives. Applicant’s argument is considered moot because Kim is not relied upon to teach either alternative and Applicant’s argument is directed to Kim. Even though Rosenbaum is relied upon to teach the “data blank” alternative, it is noted that Kim discloses on page 4, “In a preferred embodiment, the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light, but the scanner is not limited thereto” (emphasis added).

In the third argument, Applicant argues based on paragraphs 41 and 42 of Kim, that Kim “does not teach or suggest adding the library model data into the scan data and post-processing the combined data to correct or fill missing portions.” The argument is considered moot because Kim is not relied upon to teach the “post-processing” step in relation to the “data blank”.

Applicant’s remaining arguments, see Remarks at page 17, filed 3 November 2025, with respect to the rejections under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues the cited art does not cure the deficiencies of Kim. Examiner respectfully disagrees.
Applicant argues “neither reference describes that the scan data has a data blank”. Examiner respectfully disagrees. Rosenbaum discloses supplementing 3D scans “having hole 404 due to insufficient lighting conditions” by augmenting (supplementing) a scan “by merging one or more portions of the reference object with the scanned 3D geometry and/or other scanned features. To correct gaps in geometry, such as hole 404, scanned environment enhancer 220 could, for example, complete the gaps with geometry based on or from one or more selected reference objects.” See Rosenbaum at pars. 59-60. Thus, while Kim no longer anticipates claims 1, 5, and 16, the combination of Kim in view of Rosenbaum renders claims 1, 5, and 16 obvious as being unpatentable under 35 U.S.C. 103.

Applicant argues, “Both Rosenbaum and Weiss merely disclose general 3D model registration or alignment techniques, and do not recognize, address, or solve the above-mentioned problem of data blanks due to material reflectivity differences.” Examiner respectfully disagrees. A disclosure of generalized 3D model registration or alignment techniques is applicable to specific applications of those techniques. Holes or incomplete data can arise in a 3D scan due to more than one cause besides material reflectivity differences. For example, if a 3D model of a tooth is generated from different views of an intraoral scanner, any region of the tooth that is not included in each image used to construct the 3D model will result in a data blank or void because there is no image data corresponding to that region. Rosenbaum discusses an analogous example in paragraph 20. In fact, in paragraph 3, Rosenbaum explicitly states that incomplete scan data can be caused by “insufficient lighting”. Rosenbaum’s scanner includes a depth-sensing camera and Kim’s scanner includes a structured light scanner. An intraoral device that acquires images of reflected structured light to create a 3D model is a depth-sensing camera.
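The "data blank" the Office Action describes, a region captured in no scan view, shows up in a reconstructed triangle mesh as a hole. A minimal generic sketch (an illustration only, not taken from Kim or Rosenbaum) flags such holes by finding edges that belong to exactly one triangle:

```python
from collections import Counter

def boundary_edges(triangles):
    """Edges used by exactly one triangle bound a hole (a 'data blank')
    in an otherwise closed scan mesh; interior edges are shared by two."""
    counts = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return [e for e, n in counts.items() if n == 1]

# A quad split into two triangles is an open patch: its outer rim
# is a boundary loop, i.e. the mesh has a hole.
tris = [(0, 1, 2), (0, 2, 3)]
print(sorted(boundary_edges(tris)))   # → [(0, 1), (0, 3), (1, 2), (2, 3)]
```

A closed surface (e.g. a tetrahedron's four faces) returns an empty list, which is the "no data blank" case.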
Thus, Rosenbaum explicitly addresses the problem of data blanks in 3D models of scanned objects. See, e.g., Rosenbaum at pars. 17-18.

Claim Interpretation

According to the Federal Circuit’s decision in SuperGuide v. DirecTV, claim language of the type “at least one of … and …” creates a presumption that Applicant intended the plain and ordinary meaning of the claim language to be a conjunctive list, unless the Specification supports an interpretation of the claim language that rebuts the presumption.[1] Claims 1, 5, 6, and 16 recite limitations that raise the presumption of a conjunctive list per SuperGuide. Claims 1 and 6 are representative.

[Claim 1] … wherein the subject is at least one of a mouth of a patient and an oral model of the mouth of the patient …

[Claim 6] The method of claim 5, wherein the subject is at least one selected from a group comprising a mouth of a patient, a negative model of the mouth, and a positive model of the mouth.

As a preliminary matter, “at least one selected from” is considered equivalent to “at least one of”. The “subject” of the “scan step” in claim 1 cannot simultaneously be a model of a patient’s mouth and the patient’s mouth itself based on the Specification. The Specification at paragraph 58 describes alternatives for the subject, providing, “the subject is not essentially limited to the inside of the actual mouth of a patient, and the subject may be an oral model (plaster cast) for testing the insertion depth and insertion angle of a structure before the structure is inserted into the mouth of a patient.” Accordingly, “at least one of a mouth of a patient and an oral model of the mouth of the patient” is interpreted to be a disjunctive list.

Regarding claim 6, in addition to the portions of the Specification noted above, paragraph 58 provides, “The oral model may be a negative model, that is, an impression obtained by performing impression taking by using alginate, etc. or may be a positive model obtained by filling a negative model with a material such as plaster. Accordingly, the scan data 1 may be digital data of the negative or positive model that is imitated from a real thing within the mouth of a patient or the inside of the mouth.” (emphasis added). It is not reasonable to interpret the group of alternative subjects as a conjunctive list, e.g., a subject that is both a positive and a negative model of the mouth. Therefore, “at least one selected from a group comprising a mouth of a patient, a negative model of the mouth, and a positive model of the mouth” is interpreted to be a disjunctive list.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 7-16 are rejected under 35 U.S.C. 103 as being unpatentable over KR Pat. Appl. Pub. No. 20140008881 (see machine translation) to Kim in view of U.S. Pat. Appl. Pub. No. 20180114363 to Rosenbaum.

Regarding claim 1, Kim teaches a method of supplementing scan data using library data, the method comprising:

a scan step of obtaining scan data of a subject (Kim, pg. 4, par. [7], “oral gypsum model”) into which a structure (Kim, pg. 4, par. [1], “scan pin”) is inserted through a scanner (FIG. 4 shows a scan pin inserted into the gypsum model; Kim, pg. 4, par. [7], “the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light”);

a step of selecting, by a controller (Kim, pg. 4, par. [8], “design controller 164” of “a PC”), library model data (Kim, pg. 4, par. [18], “standard scan pin image”; The pin image, like the model of the plaster, is three-dimensional, as shown in, for example, Fig. 9b) corresponding to the structure (Kim, pg. 4, par. [18], “in the 3D plaster model image, the scan pin image portion is aligned and combined with the standard scan pin image stored in the scan pin information DB 162 (step 202).”); and

a step of post-processing, by the controller, the scan data (Kim, pg. 4, par. [8], “design controller 164” of “a PC”; Image alignment is a post-processing step because it occurs after the model is initially created. See Kim at pg. 4, par. 19),

wherein the subject is an oral model of the mouth of the patient (A gypsum dental impression with a pin structure inserted therein is a model of the mouth of the patient from which the impression was taken. See Kim at pg. 4, par. [7], “oral gypsum model”),

wherein the library model data is added to the scan data in the step of post-processing and is processed along with the scan data (Three-dimensional alignment is a post-processing step that follows data acquisition and adds different datasets together into a shared coordinate system. See Kim at pg. 4, par. 19; Fig. 9B (best viewed in color) shows the post-processed 3D image including both the scan pin portion highlighted in green and the plaster model portion highlighted in purple. The model is further post-processed to design a crown over the scan pin, as shown in Fig. 10b.),

but does not teach that which is explicitly taught by Rosenbaum.

Rosenbaum teaches wherein the scan data before the step of post-processing has a data blank (Rosenbaum, par. 59, “hole 404 due to insufficient lighting conditions”. Rosenbaum discloses supplementing 3D scans “having hole 404 due to insufficient lighting conditions” by augmenting (supplementing) a scan “by merging one or more portions of the reference object with the scanned 3D geometry and/or other scanned features. To correct gaps in geometry, such as hole 404, scanned environment enhancer 220 could, for example, complete the gaps with geometry based on or from one or more selected reference objects.” See Rosenbaum at pars. 59-60.).

Kim discloses generating three-dimensional models of objects and supplementing them with library model data. (Kim, pg. 4, par. [19]; Figs. 9a and 9b). Thus, Kim shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data. Rosenbaum discloses generating three-dimensional models of objects and supplementing them with library model data. (Rosenbaum, par. [0019]; par. [0069]).
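The claim chart above characterizes alignment as post-processing that brings different datasets into a shared coordinate system. A generic sketch of that step (toy transform and points, not taken from either reference) is to rigidly transform the library model into the scan's frame and then merge the two point sets:

```python
import math

def to_scan_frame(points, theta_z, t):
    """Rigid transform (rotation about z, then translation) mapping
    library-model points into the scan's coordinate system."""
    c, s = math.cos(theta_z), math.sin(theta_z)
    return [(c * x - s * y + t[0],
             s * x + c * y + t[1],
             z + t[2]) for x, y, z in points]

scan_points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # acquired scan data
library_pin = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]   # library model of the pin

# Post-processing: transform the library data, then merge both datasets
# into one point set in the scan's coordinate system.
aligned = to_scan_frame(library_pin, math.pi / 2, (1.0, 0.0, 0.0))
merged = scan_points + aligned
```

Real registration would estimate the transform (e.g. from matched features such as Kim's recognition marks) rather than take it as given; this sketch only shows the merge once a transform is known.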
Rosenbaum further discloses, “3D object(s) produced by a scan may be incomplete or inaccurate. This could be from environmental conditions that make it difficult to detect physical features, such as insufficient lighting, proximity, and the like. … various complications can result in defects to 3D virtual objects produced from the 3D scanning such as missing parts or holes in the model or pixel tearing of textures. Thus, due to insufficient environmental information from scanning devices, it may not be possible to accurately complete reproductions of the physical environment.” (par. [0003]). Thus, Rosenbaum shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

A person of ordinary skill in the art would have been motivated to use the three-dimensional modelling disclosed by Kim with the reference model supplementation disclosed by Rosenbaum, to thereby merge reference library model data with small voids or data blanks detected in a model scan by filling the missing data with matching model data as an approximation of the data that was missing. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of providing a more accurate baseline for alignment.

Regarding claim 2, Kim in view of Rosenbaum teaches the method of claim 1, wherein the library model data added in the step of post-processing is aligned with at least some of the scan data (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image.”; Figs. 9a and 9b show the library data added to/supplementing the scan data.).

Regarding claim 3, Kim in view of Rosenbaum teaches the method of claim 1, wherein the at least some of the scan data is aligned based on the library model data (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image.”; The library data is aligned with the scanned oral model.).

Regarding claim 4, Kim in view of Rosenbaum teaches the method of claim 1, wherein in a three-dimensional (3-D) model of the structure (Rosenbaum, par. [0017]-[0018], “a partial scanned 3D model or a model with large holes in it”), a data blank of the structure (Rosenbaum, par. [0017]-[0018], “missing parts or “holes.”) that has not been scanned in the scan step (i.e., was missed during scanning) is supplemented with the library model data (Rosenbaum, par. [0017]-[0018], “automatically matching (e.g., during a live or real-time scanning process) a partial scanned 3D model or a model with large holes in it to an existing model to provide more accurate surface reconstruction or an otherwise enhanced scanned 3D virtual object.”; par. [0066], “At block 510, method 500 includes initiating a scan of a physical environment. For example, environmental scanner 212 can initiate a scan of a physical environment, which may be performed by a depth-sensing camera of user device 110.”; par. [0019], “The library objects used for matching to scanned 3D geometry (e.g., for mesh fitting) can include but is not limited to: basic primitives (e.g. cubes, spheres), stock objects that have geometric similarities with a scanned mesh (e.g. table, chair, face), and/or a model of the same actual object(s) scanned previously. In order to auto-complete textures for scanned 3D virtual objects, a library object texture may be used to infer the texture using the surrounding textures and smooth transitions may be produced between them.”; par. [0069], “scanned geometry enhancer 220 can augment scanned environmental features 236 with one or more features of the matched one or more of reference objects 232. Optionally, the segmented scanned environmental features may be displayed and/or presented on a user device, such as via scanning interface 218. Optionally, blocks 520, 530, and 540 may repeat as the physical environment is further scanned, as indicated in FIG. 5.”). The rationale for obviousness is the same as provided for claim 1.

Regarding claim 5, Kim teaches a method of supplementing scan data using library data, the method comprising:

a scan step of obtaining scan data of a subject (Kim, pg. 4, par. [7], “oral gypsum model”) into which a structure is inserted (Kim, pg. 4, par. [1], “scan pin”) through a scanner (Kim, pg. 4, par. [7], “the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light”); and

an alignment step of generating, by a controller (Kim, pg. 4, par. [8], “design controller 164” of “a PC”), a three-dimensional (3-D) model of the subject (Kim, pg. 4, par. [7], “oral gypsum model”; pg. 4, par. [8], “3D image generator 166”; The model includes several subjects including multiple teeth and a position of a scanbody. A ‘scanbody’ is used to identify a location of a dental implant in CAD software. Kim discloses a scanbody, “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance” (pg. 4, par. [9]).) by aligning the scan data with library model data corresponding to the structure (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image.”),

wherein the subject is an oral model of the mouth of the patient (A gypsum dental impression is a model of the mouth of the patient from which the impression was taken. See Kim at pg. 4, par. [7], “oral gypsum model”), but does not teach that which is explicitly taught by Rosenbaum.

Rosenbaum teaches wherein the scan data has a data blank (Rosenbaum, par. 59, “hole 404 due to insufficient lighting conditions”). The rationale for obviousness is the same as provided for claim 1.

Regarding claim 7, Kim in view of Rosenbaum teaches the method of claim 5, wherein the structure is at least any one selected among prosthetic appliances comprising a scanbody (Kim, pg. 4, par. 7, “oral gypsum model”; The model includes several subjects including multiple teeth and a position of a scanbody. A ‘scanbody’ is used to identify a location of a dental implant in CAD software. Kim discloses a scanbody, “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance” (pg. 4, par. [9]).) or an abutment inserted into the subject (Kim, pg. 4, par. [11], “The design controller 164 controls the operations of the 3D image generator 166 and the data converter / renderer 168 in response to a user's manipulation of the input unit 172 so that the artificial crown and abutment can be designed”; pg. 4, par. [16], “Figure 6 shows in detail the process of producing a custom abutment using the artificial tooth manufacturing system of Figure 5 based on the oral gypsum model.”).

Regarding claim 8, Kim in view of Rosenbaum teaches the method of claim 5, wherein the alignment step comprises:

a local alignment step of aligning the scan data input in the scan step (Kim, pg. 4, par. [7], “the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light”; pg. 4, par. [12], “3D image generator 166 may generate a 3D image based on the 2D image input from the scanner 150.”; Converting a 2D image into a three-dimensional model requires aligning 2D image locations in three-dimensional space as in the structured-light technique disclosed in par. [7] on pg. 4.); and

a global alignment step of generally aligning at least some of the scan data which is input with the library model data after the local alignment step is terminated (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image. This can be done by rotating one of the two images in three directions so that the boundary of the is exactly coincident. By aligning and combining the images, it is possible to determine the position and orientation of the scan pin substructure, in particular, the fixture seating shape, which is not revealed in the first gypsum model image (step 204).”), but does not teach that which is explicitly further taught by Rosenbaum.

Rosenbaum further teaches sequentially aligning the scan data (Rosenbaum, par. [0003], “When performed in real-time during scanning, the object completion can assist the user's decision whether to skip scanning for an area or attempt to scan it more thoroughly. For example, if the user rotates and otherwise inspects the 3D model and it looks complete, the user may terminate scanning.”; par. [0060], “Scanned environment enhancer 220 can perform the augmentations to scanned environmental features 236 (e.g., in real-time during a live scan of the environment) by, for example, mesh fitting one or more of the selected reference objects to the scanned 3D geometry. This can result in a hybrid object including some portions of scanned geometry and some portions of reference geometry and/or other object features.”).

Kim in view of Rosenbaum is analogous to the claimed invention for the reasons provided above. Rosenbaum further discloses generating three-dimensional models of objects and supplementing them with library model data. (Rosenbaum, par. [0019]; par. [0069]). Rosenbaum also discloses sequentially scanning the physical environment to either fill missing parts of a generated three-dimensional model with additional scans or create a hybrid model with library model data. (Rosenbaum, par. [0003]; par. [0060]). Thus, Rosenbaum shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data and sequentially scanning an object to fill in missing data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

A person of ordinary skill in the art would have been motivated to use the three-dimensional modelling disclosed by Kim in view of Rosenbaum with the sequential scanning further disclosed by Rosenbaum, to thereby sequentially scan the physical environment to either fill missing parts of the generated three-dimensional model with additional scans to sequentially perform local alignment of model features or create a hybrid model with library model data sequentially aligned in each subsequent scan. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of creating a more accurate model by filling in the missing data with real data or model data in real time as the target object is sequentially scanned.

Regarding claim 9, Kim in view of Rosenbaum teaches the method of claim 8, wherein: a real-time 3-D model of the subject is generated in the local alignment step (Rosenbaum, par. [0003], “When performed in real-time during scanning, the object completion can assist the user's decision whether to skip scanning for an area or attempt to scan it more thoroughly. For example, if the user rotates and otherwise inspects the 3D model and it looks complete, the user may terminate scanning.”), and the real-time 3-D model is reconstructed in the global alignment step (Rosenbaum, par. [0060], “Scanned environment enhancer 220 can perform the augmentations to scanned environmental features 236 (e.g., in real-time during a live scan of the environment) by, for example, mesh fitting one or more of the selected reference objects to the scanned 3D geometry. This can result in a hybrid object including some portions of scanned geometry and some portions of reference geometry and/or other object features.”). The rationale for obviousness is the same as provided for claim 8.

Regarding claim 10, Kim in view of Rosenbaum teaches the method of claim 5, wherein the library model data is selected before the alignment step (Kim, pg. 4, par. [9], “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance”; The CAD data is stored/selected before the alignment in step 202 occurs.).
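The two-stage alignment mapped to claim 8 above pairs a local step (accumulating sequentially scanned data) with a global step (Kim's size matching of the recognition pin against the library image). A toy sketch under those assumptions, with all names and numbers illustrative and frames assumed pre-registered so local alignment reduces to accumulation:

```python
def local_align(frames):
    """Local alignment: accumulate sequential scan frames into one dataset.
    (Real systems estimate a per-frame transform; here frames are assumed
    already registered, so accumulation is simple concatenation.)"""
    merged = []
    for frame in frames:
        merged.extend(frame)
    return merged

def global_align(scan_points, feature_size_scan, feature_size_library):
    """Global alignment in the spirit of Kim: rescale the scan so a known
    feature (the recognition pin) matches its size in the library model."""
    s = feature_size_library / feature_size_scan
    return [(s * x, s * y, s * z) for x, y, z in scan_points]

frames = [[(0.0, 0.0, 0.0)], [(2.0, 0.0, 0.0)]]
scan = local_align(frames)                    # local step
scaled = global_align(scan, 2.0, 1.0)         # global step: pin appears 2.0
                                              # units in scan, 1.0 in library
```

Kim additionally rotates one image in three directions until the boundaries coincide; only the scale-matching portion is sketched here.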
Regarding claim 11, Kim in view of Rosenbaum teaches the method of claim 10, wherein the library model data is selected before (Kim, pg. 4, par. [9], “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance”; The CAD data is stored/selected before the alignment in step 202 occurs.) the scan step. Regarding claim 12, Kim in view of Rosenbaum teaches the method of claim 8, wherein the library model data is selected before the global alignment step (Kim, pg. 4, par. [9], “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance”; The CAD data is stored/selected before the alignment in step 202 occurs.). Regarding claim 13, Kim in view of Rosenbaum teaches the method of claim 12, wherein the library model data is selected before the scan step (Kim, pg. 4, par. [9], “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance”; The CAD data is stored/selected before the alignment in step 202 occurs.) or after the local alignment step. Regarding claim 14, Kim in view of Rosenbaum teaches the method of claim 5, wherein the library model data is automatically or manually selected (Kim discloses the scan pin CAD data being “stored in advance” (Kim, pg. 4, par. [9]). Kim further discloses a “design/processing program 160” (Kim, pg. 4, par. [8]). Kim discloses selecting the data, which implicitly must be done either automatically or manually because the terms “automatically” and “manually” are opposites. The selection must be one or the other because there are only two possible ways of selecting the data: either manually or automatically. 
Thus, Kim discloses at least one of “automatically or manually” selecting the “library model data”. For example, the design program receives user input, but the program itself performs automatic functions in response to user input, such as overlaying the scan image data with the scan pin image. Therefore, the selection of the CAD data from a storage location to align the data is performed automatically because it is the program, not the user, that renders the alignment.). Regarding claim 15, Kim in view of Rosenbaum teaches the method of claim 14, but does not teach that which is explicitly further taught by Rosenbaum. Rosenbaum further teaches wherein when the library model data is manually selected (Rosenbaum, par. [0062], “some user selection or other input is employed first to allow the user to select between augmentation options, such as those for particular areas or regions of a 3D model, or for the 3D model overall. As an example, a user could be provided with different reference objects to select for augmentation and/or different combinations of object attributes, such as textures for augmented regions of the 3D model and the like.”), the library model data is selected in a library interface (Rosenbaum, par. [0037], “Scanning interface 218 can, for example, correspond to application 110 of FIG. 1 and include a graphical user interface (GUI) or other suitable interface to assist the user in capturing physical environmental features via environmental scanner. Scanning interface 218 can, for example, allow the user to selectively activate or deactivate environmental scanning by environmental scanner 212.”) in response to an input from a user (Rosenbaum, par. [0062], “user selection or other input”). Kim in view of Rosenbaum is analogous to the claimed invention for the reasons provided above. Rosenbaum further discloses generating three-dimensional models of objects and supplementing them with library model data (Rosenbaum, par. [0019]; par. 
[0069]) and a user interface that permits a user to manually select different models to augment scan data. (Rosenbaum, par. [0037]; par. [0062]). Thus, Rosenbaum shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with manually selected library data using a user interface, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data. A person of ordinary skill in the art would have been motivated to combine the model library and PC disclosed by Kim in view of Rosenbaum with the user interface and model selection interface functionality further disclosed by Rosenbaum, to thereby give a user the ability to select different dental device models to supplement the scan data for making a prosthetic as suggested by Rosenbaum. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of giving the user more freedom of choice and control over the prosthetic that is created for a patient.

Regarding claim 16, Kim teaches a system for supplementing scan data using library data, the system comprising: a scanner (Kim, pg. 4, par. [7], “the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light”) configured to obtain scan data of a subject (Kim, pg. 4, par. [7], “oral gypsum model”) into which a structure (Kim, pg. 4, par. [1], “scan pin”) is inserted; and a processor (Kim, pg. 4, par. [8], “design controller 164” of “a PC”) which is configured to execute: a control operation to generate a three-dimensional (3-D) model of the subject (Kim, pg. 4, par. [7], “oral gypsum model”; pg. 4, par.
[8], “3D image generator 166”; The model includes several subjects including multiple teeth and a position of a scanbody/implant.) by aligning the obtained scan data with library model data corresponding to the structure (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image.”), wherein the subject is an oral model of the mouth of the patient (A gypsum dental impression is a model of the mouth of the patient from which the impression was taken. See Kim at pg. 4, par. [7], “oral gypsum model”), but does not teach that which is explicitly taught by Rosenbaum. Rosenbaum teaches wherein the scan data has a data blank (Rosenbaum, par. [0059], “hole 404 due to insufficient lighting conditions”). The rationale for obviousness is the same as provided for claim 1.

Claims 6 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Rosenbaum and in further view of U.S. Pat. Appl. Pub. No. 20210100643 (prov. appl. filed Sep. 4, 2019) to Weiss.

Regarding claim 6, Kim in view of Rosenbaum teaches the method of claim 5, wherein the subject is a positive model of the mouth (Kim, Figs. 9a, 9b, 10a, 10c, and 14-19 show the plaster model, which is a positive model of the patient’s mouth created from the negative space of an impression taken from the patient’s mouth.). Kim in view of Rosenbaum does not explicitly teach wherein the subject is at least one selected from a group comprising a mouth of a patient and a negative model of the mouth. Weiss teaches wherein the subject is at least one selected from a group comprising a mouth of a patient (Weiss, par. [0117], “According to an example, a user (e.g., a practitioner) may subject a patient to intraoral scanning.
In doing so, the user may apply scanner 150 to one or more patient intraoral locations.”; A handheld scanner 150 is shown in FIG. 1.) and a negative model of the mouth (Weiss, par. [0112], “it should be understood that embodiments described with reference to intraoral scans also apply to lab scans or model/impression scans. A lab scan or model/impression scan may include one or more images of a dental site or of a model or impression of a dental site, which may or may not include height maps, and which may or may not include color images”; A dental impression is a negative model.).

Kim in view of Rosenbaum discloses generating three-dimensional models of dental-related objects and supplementing them with library model data to construct a prosthesis for a person’s mouth. (Kim, pg. 4, par. [19]; Figs. 9a and 9b). Kim discloses scanning a positive model of the mouth (Kim, Figs. 9a, 9b, 10a, 10c, and 14-19) and further discloses a group of other potential subjects including a mouth of a patient (Kim, pg. 4, par. [2], “In the impression body, the structure of the patient's mouth, such as the shape and arrangement of the patient's teeth, and the upper shape of the scan pin are negatively reflected”; The plaster model is a model of the patient’s mouth, made by taking a negative impression of the mouth and then creating the gypsum oral model from the negative impression to create a positive model of the mouth. To obtain an impression in the first place, the patient’s mouth must be available. Thus, positive and negative models as well as the mouth of the patient are available as potential subjects.) and a negative model of the mouth (Kim, pg. 4, par. [2], “In the impression body, the structure of the patient's mouth, such as the shape and arrangement of the patient's teeth, and the upper shape of the scan pin are negatively reflected”; The impression of the patient’s mouth is available as a subject. The impression is a negative model.).
Thus, Kim in view of Rosenbaum further shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model derived from captured images of a subject with library data, the subject being a positive or negative model of the mouth or a mouth of a patient, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

Weiss discloses generating three-dimensional models of dental-related objects and supplementing them with library model data to construct a prosthesis for a person’s mouth. (Weiss, par. [0119], “A prosthesis may include any restoration such as crowns, veneers, inlays, onlays, implants and bridges, for example, and any other artificial partial or complete denture.”). Weiss further discloses performing an intraoral scan of a patient’s mouth with a handheld scanner (Weiss, par. [0183]) including a camera. (Weiss, par. [0125]). Weiss also discloses scanning impressions. (Weiss, par. [0112]). Thus, Weiss shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model derived from captured images of a patient’s mouth with library data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

A person of ordinary skill in the art would have been motivated to use the handheld camera-based scanner disclosed by Weiss to scan subjects including a patient’s mouth, a negative model of the mouth, and a positive model of the mouth of a patient disclosed by Kim in view of Rosenbaum, to thereby scan any of said subjects and create a three-dimensional model thereof for designing and fitting a prosthesis to a patient.
Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of making the scanning more user-friendly to the operator by having more choices of different subjects.

Regarding claim 17, Kim in view of Rosenbaum teaches the system of claim 16, but does not teach that which is explicitly taught by Weiss. Weiss teaches wherein the scanner comprises: at least one camera configured to accommodate light reflected by the subject (Weiss, par. [0125], “performing image registration includes capturing 3D data of various points of a surface in multiple images (views from a camera), and registering the images by computing transformations between the images.”), and an imaging sensor (Weiss, par. [0125], “camera”; A camera requires a sensor to convert incident light/radiation to an electrical signal.) telecommunicationally (Under the broadest reasonable interpretation, “telecommunicationally” is interpreted to mean communication over a distance through a medium including wired or wireless transmission.) connected to the camera (Weiss, par. [0183], “The dental modeling logic 2550 may be, for example, a component of an intraoral scanning apparatus that includes a handheld intraoral scanner and a computing device operatively coupled (e.g., via a wired or wireless connection) to the handheld intraoral scanner.” The scanner includes the camera and its sensor. The resulting image data is transmitted wired or wirelessly.) and configured to obtain a two-dimensional (2-D) image of the subject (Weiss, par. [0115], “Computing device 105 may be coupled to an intraoral scanner 150 (also referred to as a scanner) and/or a data store 125”; par.
[0241], “Inputs may include image data, such as 2D height maps that provide depth values at each pixel location, and/or color images that are actual or estimated colors for a given 2D model projection”).

Kim in view of Rosenbaum discloses generating three-dimensional models of dental-related objects and supplementing them with library model data to construct a prosthesis for a person’s mouth. (Kim, pg. 4, par. [19]; Figs. 9a and 9b). Thus, Kim in view of Rosenbaum shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

Weiss discloses generating three-dimensional models of dental-related objects and supplementing them with library model data to construct a prosthesis for a person’s mouth. (Weiss, par. [0119], “A prosthesis may include any restoration such as crowns, veneers, inlays, onlays, implants and bridges, for example, and any other artificial partial or complete denture.”). Weiss further discloses performing an intraoral scan with a scanner including a camera. (Weiss, par. [0125]). Thus, Weiss shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data.

A person of ordinary skill in the art would have been motivated to use the scanning technique disclosed by Kim in view of Rosenbaum with the handheld camera-based intraoral scanner disclosed by Weiss, to thereby scan a mouth structure and create a three-dimensional model for fitting a prosthesis using the handheld scanner disclosed by Weiss that includes a camera and image sensor.
Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of making the scanner more user-friendly to the operator.

Regarding claim 18, Kim in view of Rosenbaum and in further view of Weiss teaches the system of claim 17, wherein the control operation comprises: a three-dimensional (3-D) conversion operation to convert the 2-D image of the subject into 3-D data (Kim, pg. 4, par. [12], “3D image generator 166 may generate a 3D image based on the 2D image input from the scanner 150.”), and an alignment operation to align the 3-D data (Kim, pg. 4, par. [19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image.”).

Regarding claim 19, Kim in view of Rosenbaum and in further view of Weiss teaches the system of claim 18, wherein the alignment operation comprises: a local alignment operation to perform alignment on the 3-D data that are consecutive to each other (Kim, pg. 4, par. [7], “the scanner 150 is a three-dimensional scanner that emits a structured light to a subject, that is, the oral gypsum model, and receives and interprets reflected light”; Structured light scanning emits a pattern of light, where the pattern is aligned in three-dimensions to form a three-dimensional model.), and a global alignment operation to generally align the 3-D data with the library model data after the alignment performed by the local alignment operation (Kim, pg. 4, par.
[19], “Image alignment is performed by adjusting the size of the plaster model image so that the size of the recognition pin of the scan pin in the plaster model image is the same as the size of the recognition mark in the standard scan pin image. This can be done by rotating one of the two images in three directions so that the boundary of the is exactly coincident. By aligning and combining the images, it is possible to determine the position and orientation of the scan pin substructure, in particular, the fixture seating shape, which is not revealed in the first gypsum model image (step 204).”), but does not teach that which is explicitly further taught by Rosenbaum. Rosenbaum further teaches sequentially aligning the scan data (Rosenbaum, par. [0003], “When performed in real-time during scanning, the object completion can assist the user's decision whether to skip scanning for an area or attempt to scan it more thoroughly. For example, if the user rotates and otherwise inspects the 3D model and it looks complete, the user may terminate scanning.”; par. [0060], “Scanned environment enhancer 220 can perform the augmentations to scanned environmental features 236 (e.g., in real-time during a live scan of the environment) by, for example, mesh fitting one or more of the selected reference objects to the scanned 3D geometry. This can result in a hybrid object including some portions of scanned geometry and some portions of reference geometry and/or other object features.”). Kim in view of Rosenbaum and in further view of Weiss is analogous to the claimed invention for the reasons provided above. Rosenbaum further discloses generating three-dimensional models of objects and supplementing them with library model data. (Rosenbaum, par. [0019]; par. [0069]). Rosenbaum also discloses sequentially scanning the physical environment to either fill missing parts of a generated three-dimensional model with additional scans or create a hybrid model with library model data. 
(Rosenbaum, par. [0003]; par. [0060]). Thus, Rosenbaum shows that it was known in the art before the effective filing date of the claimed invention to supplement a three-dimensional model with library data and sequentially scan an object to fill in missing data, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, supplementing scan data with library model data. A person of ordinary skill in the art would have been motivated to use the three-dimensional modelling disclosed by Kim in view of Rosenbaum and in further view of Weiss with the sequential scanning further disclosed by Rosenbaum, to thereby sequentially scan the physical environment to either fill missing parts of the generated three-dimensional model with additional scans to sequentially perform local alignment of model features or create a hybrid model with library model data sequentially aligned in each subsequent scan. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of creating a more accurate model by filling in the missing data with real data or model data in real time as the target object is sequentially scanned.

Regarding claim 20, Kim in view of Rosenbaum and in further view of Weiss teaches the system of claim 19, wherein: the control operation further comprises a library selection operation to select the library model data (Kim, pg. 4, par. [9], “Examples of the standard scan pin image may include a 3D CAD image that renders 3D CAD data of the scan pin, or a 3D scanning image that has been precisely scanned and stored in advance”; The CAD data is stored/selected before the alignment in step 202 occurs.
The program selects the model for designing a prosthetic.), and the library model data selected by the library selection operation supplements the scan data obtained by the scanner (Kim, Figs. 9a and 9b show the library data added to/supplementing the scan data.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Pat. Appl. Pub. No. 20200197129 is pertinent because it discloses supplementing a data blank in a 3D model of an intraoral site, which is relevant to the “data blank” in the claims. See pars. 10, 52, 62, 63, and 88. U.S. Pat. Appl. Pub. No. 20210243382 is pertinent because it discloses “if the external light is serious, a data blank (B) in which the data are missing between the gingiva (g4) and the tooth (t4) is also likely to be generated”, which is pertinent to the “data blank” caused by reflectivity differences described in Remarks filed 3 November 2025.

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN P POTTS whose telephone number is (571)272-6351. The examiner can normally be reached M-F, 9am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN P POTTS/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

1 See Superguide Corp. v. Direct TV Enterprises, Inc., 358 F.3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).

Prosecution Timeline

Oct 21, 2022
Application Filed
Apr 28, 2025
Non-Final Rejection — §103
Nov 03, 2025
Response Filed
Jan 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591966
METHOD AND APPARATUS FOR ANALYZING BLOOD VESSEL BASED ON MACHINE LEARNING MODEL
2y 5m to grant · Granted Mar 31, 2026
Patent 12560734
METHOD AND SYSTEM FOR PROCESSING SEISMIC IMAGES TO OBTAIN A REFERENCE RGT SURFACE OF A GEOLOGICAL FORMATION
2y 5m to grant · Granted Feb 24, 2026
Patent 12555259
PRODUCT IDENTIFICATION APPARATUS, PRODUCT IDENTIFICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant · Granted Feb 17, 2026
Patent 12548658
Systems and Methods for Scalable Mapping of Brain Dynamics
2y 5m to grant · Granted Feb 10, 2026
Patent 12538743
WARPAGE AMOUNT ESTIMATION APPARATUS AND WARPAGE AMOUNT ESTIMATION METHOD
2y 5m to grant · Granted Jan 27, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+36.8%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
