DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/25/2026 has been entered.
Response to Amendment
3. Acknowledgement is made of the amendment filed on February 25, 2026, in which claim 2 is amended. Claims 2-21 remain pending.
Response to Arguments
4. Applicant's arguments, filed on February 25, 2026, with respect to Claims 2-21 have been fully considered but they are not persuasive.
5. With regard to the arguments for independent claim 2, applicant argues that Sachdeva et al. (US 2004/0029068 A1), Unklesbay et al. (US 2021/0045701 A1) and Pokotilov et al. (US 2018/0263733 A1) fail to disclose generating a parametric 3D model of the patient's face, teeth, gingiva, lips, or any combination thereof, wherein the parametric 3D model comprises patient-specific input parameters of the patient's gingiva based at least in part on the identified one or more edges. The examiner respectfully agrees; however, the argument is moot in view of the new grounds of rejection set forth for claim 2, because the newly cited Wu et al. (US 2018/0085201 A1) teaches this limitation (“Embodiments can enable non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient), respectively. A teeth statistic model defining individual teeth in a teeth row can be developed for achieving this. Under this model-based approach, a personalized high-quality 3D teeth model can be reconstructed using just a sparse set of (e.g., uncalibrated) images or a short monocular video sequence as input. The teeth statistic model can be employed to reconstruct a 3D model of teeth based on images of teeth of an object or a person corresponding to the 3D model. In some embodiments, the teeth statistic model can jointly describe shape and pose variations per tooth, and as well as placement of the individual teeth in the teeth row. Unlike related approaches in the medical field, the model-based approach in accordance with the disclosure is non-invasive and can be used to reconstruct a geometric model of the entire teeth row including the gums from images captured from afar and potentially simultaneously. Under this approach, statistical information of a novel parametric tooth prior learned from high-quality 3D dental scans that models the global deformations of an entire teeth row as well as the individual variation of each single tooth can be used to fit teeth information acquired from the images.” [0005] “a single image or multiple images of an object's or a person's teeth can be received. Object-specific or person-specific teeth information can then be acquired from the image(s). Such teeth information may include data terms indicating color, edges, shape, maxilla and/or mandible bones and/or any other aspects of the teeth of the object or the person. Parameters of each tooth for the 3D model, corresponding to the object or the person, can be estimated from the statistic model using the teeth information. The teeth parameters can then be used to model the teeth of the 3D model using pre-authored 3D teeth templates.” [0007]) Wu thus teaches that statistical information of a parametric tooth prior learned from 3D dental scans, which models the global deformations of an entire teeth row as well as the individual variation of each single tooth, is used to fit teeth information acquired from the images; that the entire teeth row, including the gums, is reconstructed from captured images; and that person-specific edges and other information regarding the individual teeth serve as input. The teeth parameters can then be used to model the teeth of the 3D model. Therefore, Wu teaches the disputed limitation of claim 2 as recited.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
9. Claims 2, 3, 7-12, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Sachdeva et al. (US 2004/0029068 A1) in view of Pokotilov et al. (US 2018/0263733 A1) and Wu et al. (US 2018/0085201 A1).
10. With reference to claim 2, Sachdeva teaches A computer-implemented method of simulating orthodontic treatment, (“A method and workstation for orthodontic treatment planning of a patient.” Abstract “a practitioner viewing the model and using the treatment planning tools may determine that a patient may benefit from a combination of customized orthodontic brackets and wires and removable aligning devices. Data from the virtual patient models is provided to diverse manufacturers for coordinated preparation of customized appliances. Moreover, the virtual patient model and powerful tools described herein provide a means by which the complete picture of the patient can be shared with other specialists (e.g., dentists, maxilla-facial or oral surgeons, cosmetic surgeons, other orthodontists) greatly enhancing the ability of diverse specialists to coordinate and apply a diverse range of treatments to achieve a desired outcome for the patient. In particular, the overlay or superposition of a variety of image information, including 2D X-Ray, 3D teeth image data, photographic data, CT scan data, and other data, and the ability to toggle back and forth between these views and simulate changes in position or shape of craniofacial structures, and the ability to share this virtual patient model across existing computer networks to other specialists and device manufacturers, allows the entire treatment of the patient to be simulated and modeled in a computer.” [0068] “The type of image data that will be obtained will vary depending on the available image acquisition devices available to the practitioner. Preferably, the system employs software simulation of changes in shape or position of craniofacial structures (e.g., teeth or jaw) on the visual appearance, e.g., smile, of the patient.” [0072] “FIGS. 4A-4E show several screen displays from a user interface of the unified workstation that illustrate the process of texture mapping a 3D object (here, teeth) by projection of color data from a 2D photograph. After a patient's dentition is scanned, the virtual teeth and gingiva for both upper and lower arches are represented as a single surface, in the present example a triangle mesh surface. FIG. 4A shows a 2D digital photograph of teeth/gingivae 71 displayed in a graphical window 73 along with a 3D virtual model of the teeth 75 to one side.” [0089]) Sachdeva also teaches the computer-implemented method comprising: rendering a first color 2D image comprising a color representation of a patient’s face, teeth, gingiva, or lips, or any combination thereof; (“the system will acquire digitized images from an X-ray machine capturing X-ray photographs of the patient's head, jaw, teeth, roots of teeth, and other craniofacial structures. … While the above discussion has described how 3D image of the face can be obtained from a three-dimensional scanner, there are other possibilities that may be used in the practice of alternative embodiments. One such alternative is creating a 3D virtual face from a series of 2-D color photographs. … Morphable models can be built based on various known approaches such as optic flow algorithms or active model matching strategy, or a combination of both. One approach is to scan a set of 2D faces.” [0074-0077] “After the images of the face, craniofacial structures, X-rays, teeth etc. 
are obtained and stored in memory in digital form they are superimposed on each other (i.e., registered to each other via software in the workstation) to create a complete virtual patient model on the workstation. The superposition of the sets of image data may be developed as an automatic software process, or one in which there is user involvement to aid in the process. In one possible example, the three-dimensional textured model of the face is properly aligned with the 3D jaw model obtained from the intra-oral scan, 3D skull data from CT scan, and 2 dimensional X-rays to create a virtual patient model.” [0083]) Sachdeva teaches 2-D color photographs of the patient's head, jaw, teeth, roots of teeth, and other craniofacial structures. Sachdeva further teaches simulating a position of the patient’s teeth by rendering the parametric 3D model with the patient’s teeth in a predetermined position of a treatment plan; (“The virtual patient model, or some portion thereof, such as data describing a three-dimensional model of the teeth in initial and target or treatment positions, is useful information for generating customized orthodontic appliances for treatment of the patient. The position of the teeth in the initial and desired positions can be used to generate a set of customized brackets, and customized flat planar archwire, and customized bracket placement jigs, as described in the above-referenced Andreiko et al. patents.” [0066] “a practitioner viewing the model and using the treatment planning tools may determine that a patient may benefit from a combination of customized orthodontic brackets and wires and removable aligning devices. Data from the virtual patient models is provided to diverse manufacturers for coordinated preparation of customized appliances. Moreover, the virtual patient model and powerful tools described herein provide a means by which the complete picture of the patient can be shared with other specialists (e.g., dentists, maxilla-facial or oral surgeons, cosmetic surgeons, other orthodontists) greatly enhancing the ability of diverse specialists to coordinate and apply a diverse range of treatments to achieve a desired outcome for the patient. In particular, the overlay or superposition of a variety of image information, including 2D X-Ray, 3D teeth image data, photographic data, CT scan data, and other data, and the ability to toggle back and forth between these views and simulate changes in position or shape of craniofacial structures, and the ability to share this virtual patient model across existing computer networks to other specialists and device manufacturers, allows the entire treatment of the patient to be simulated and modeled in a computer.” [0068] “the system employs software simulation of changes in shape or position of craniofacial structures (e.g., teeth or jaw) on the visual appearance, e.g., smile, of the patient. Accordingly, at least one of the data sets will include normally include data regarding the surface configuration of the face and head. … The image data regarding the patient's exterior appearance can be obtained through other means including via scanning of the head and face of the patient via the hand-held 3D-scanner 30 described in the published OraMetrix PCT application, … the scanner captures a sequence of overlapping images of the surface of the patient as the scanner is held by the hand and moved about the face. The set of images can be obtained in only a few minutes. 
Each image is converted to a set of X, Y and Z coordinate positions comprising a cloud of points representing the surface of the face. The point clouds from each image are registered to each other to find a best fit to the data. The resulting registered point cloud is then stored in the memory as a virtual three-dimensional object.” [0072-0073] “FIGS. 4A-4E show several screen displays from a user interface of the unified workstation that illustrate the process of texture mapping a 3D object (here, teeth) by projection of color data from a 2D photograph. After a patient's dentition is scanned, the virtual teeth and gingiva for both upper and lower arches are represented as a single surface, in the present example a triangle mesh surface. FIG. 4A shows a 2D digital photograph of teeth/gingivae 71 displayed in a graphical window 73 along with a 3D virtual model of the teeth 75 to one side.” [0089]) Sachdeva teaches mapping color information from the first color 2D image onto the parametric 3D model; (“Three-dimensional image data sets of the upper and lower arches including upper and lower teeth are preferably created with a 3D optical scanner 30, such as the OraMetrix hand-held in-vivo scanner. If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique.” [0081] “FIGS. 4A-4E show several screen displays from a user interface of the unified workstation that illustrate the process of texture mapping a 3D object (here, teeth) by projection of color data from a 2D photograph. After a patient's dentition is scanned, the virtual teeth and gingiva for both upper and lower arches are represented as a single surface, in the present example a triangle mesh surface. FIG. 4A shows a 2D digital photograph of teeth/gingivae 71 displayed in a graphical window 73 along with a 3D virtual model of the teeth 75 to one side.” [0089])
[media_image1.png: greyscale image, 344 x 443]
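For illustration of the cylindrical-projection texture mapping Sachdeva describes at [0081], a minimal Python sketch follows. This is not Sachdeva's implementation: it assumes the jaw mesh is roughly centered on a vertical cylinder axis and substitutes nearest-neighbor pixel sampling for the ray-triangle intersection the reference describes.

import numpy as np

def cylindrical_uv(vertices):
    # Project mesh vertices onto a cylinder about the vertical (y) axis:
    # u is the angle around the axis, v the normalized height.
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = (np.arctan2(x, z) + np.pi) / (2.0 * np.pi)
    v = (y - y.min()) / (y.max() - y.min() + 1e-9)
    return np.stack([u, v], axis=1)

def map_photo_to_mesh(vertices, photo):
    # Assign each vertex the color of the photo pixel its cylindrical
    # projection lands on (photo is an H x W x 3 RGB array).
    h, w, _ = photo.shape
    uv = cylindrical_uv(vertices)
    cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return photo[rows, cols]  # (N, 3) per-vertex colors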
Sachdeva does not explicitly teach color-coded image comprising an RGB color-coded representation; each color channel of the RGB color-coded representation corresponds to a quality or feature of the patient’s face, teeth, gingiva, lips, or any combination thereof; identifying one or more edges of the patient’s face, teeth, gingiva, lips, or any combination thereof; generating a parametric 3D model of the patient’s face, teeth, gingiva, lips, or any combination thereof, wherein the parametric 3D model comprises patient-specific input parameters of the patient's gingiva based at least in part on the identified one or more edges; and generating a second color-coded 2D image representing the patient’s face, teeth, gingiva, lips, or any combination thereof in accordance with the predetermined position of the treatment plan of the parametric 3D model and the mapped color channel information of the parametric 3D model. These are what Pokotilov teaches. Pokotilov teaches color-coded image comprising an RGB color-coded representation; each color channel of the RGB color-coded representation corresponds to a quality or feature of the patient’s face, teeth, gingiva, lips, or any combination thereof; (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. … The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel.” [0327-0328]) Pokotilov teaches each color channel of the RGB color coded image corresponding to a different quality or feature of the model. Pokotilov also teaches identifying one or more edges of the patient’s face, teeth, gingiva, lips, or any combination thereof; (“At block 530 reference points are selected or otherwise identified on the 3D bite model and the 2D image of the patient. In some embodiments, the reference points may include the gingival apex of one or more teeth, such as the anterior teeth. In some embodiments, the reference points may include the midpoint of the incisal edge of the teeth or the ends of the incisal edges of teeth.” [0199] “the mouth opening is the shape of the inside edge of the patient's lips in the 2D image.” [0203]) Pokotilov further teaches generating a second color-coded 2D image representing the patient’s face, teeth, gingiva, lips, or any combination thereof in accordance with the predetermined position of the treatment plan of the 3D model and the mapped color channel information of the 3D model. (“Systems and methods are described herein to more closely integrate 3D bite models into 2D images of a patient” [0008] “the 3D bite model may have parameters that are similar to or the same as the parameters of the 2D image 3641. 
A final composite image 3630 may be generated based on applying the parameters determined earlier in the process to the 3D bite model 3631 such that the 3D bite model 3631 is matched to the colors of the natural teeth 3642 of the 2D image 3641 of the patient.” [0322] “according to an orthodontic treatment plan, a blurred initial image of the patient's teeth 4810, and a color coded image 4812 of the 3D model of the patient's teeth in the clinical final position. The image of a rendering of a 3D model of the patient's teeth in a clinical final position or the 3D rendered model of the patients teeth in the clinical final position 4808 may be determined based on the clinical orthodontic treatment plan for moving the patient's teeth from the initial position towards the final position, as described above.” [0324-0325] “The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
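To make the cited coding scheme concrete, the channel assignments Pokotilov describes at [0327]-[0329] (red identifies the dental anatomy, green carries brightness, blue carries the angle to the facial plane) can be sketched as follows; the array names and value ranges are illustrative assumptions, not taken from the reference.

import numpy as np

def encode_coded_image(labels, brightness, angle_deg):
    # labels: (H, W) anatomy IDs (0 = background, 1 = gingiva, 2+ = teeth);
    # brightness: (H, W) values on a 0-255 scale; angle_deg: (H, W) angle
    # between the surface and the facial plane, in degrees.
    coded = np.zeros(labels.shape + (3,), dtype=np.uint8)
    coded[..., 0] = labels                                   # red channel: anatomy ID
    coded[..., 1] = brightness                               # green channel: brightness
    coded[..., 2] = np.clip(angle_deg / 90.0 * 255, 0, 255)  # blue channel: angle
    return coded

# Each quality can then be recovered independently, e.g. a gingiva mask:
# gingiva_mask = coded[..., 0] == 1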
The combination of Sachdeva and Pokotilov does not explicitly teach generating a parametric 3D model of the patient's face, teeth, gingiva, lips, or any combination thereof, wherein the parametric 3D model comprises patient-specific input parameters of the patient's gingiva based at least in part on the identified one or more edges. This is what Wu teaches (“Embodiments can enable non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient), respectively. A teeth statistic model defining individual teeth in a teeth row can be developed for achieving this. Under this model-based approach, a personalized high-quality 3D teeth model can be reconstructed using just a sparse set of (e.g., uncalibrated) images or a short monocular video sequence as input. The teeth statistic model can be employed to reconstruct a 3D model of teeth based on images of teeth of an object or a person corresponding to the 3D model. In some embodiments, the teeth statistic model can jointly describe shape and pose variations per tooth, and as well as placement of the individual teeth in the teeth row. Unlike related approaches in the medical field, the model-based approach in accordance with the disclosure is non-invasive and can be used to reconstruct a geometric model of the entire teeth row including the gums from images captured from afar and potentially simultaneously. Under this approach, statistical information of a novel parametric tooth prior learned from high-quality 3D dental scans that models the global deformations of an entire teeth row as well as the individual variation of each single tooth can be used to fit teeth information acquired from the images.” [0005] “a single image or multiple images of an object's or a person's teeth can be received. Object-specific or person-specific teeth information can then be acquired from the image(s). Such teeth information may include data terms indicating color, edges, shape, maxilla and/or mandible bones and/or any other aspects of the teeth of the object or the person. Parameters of each tooth for the 3D model, corresponding to the object or the person, can be estimated from the statistic model using the teeth information. The teeth parameters can then be used to model the teeth of the 3D model using pre-authored 3D teeth templates.” [0007]) Wu teaches that statistical information of a parametric tooth prior learned from 3D dental scans, which models the global deformations of an entire teeth row as well as the individual variation of each single tooth, is used to fit teeth information acquired from the images; that the entire teeth row, including the gums, is reconstructed from captured images; and that person-specific edges and other information regarding the individual teeth serve as input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wu into the combination of Sachdeva and Pokotilov, in order to seamlessly integrate the teeth and gum model into existing photogrammetric multi-camera setups.
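As an illustration of the model-based fitting Wu describes (a statistical tooth prior fit to edge information acquired from images), a toy least-squares sketch follows. The linear shape basis and the known point correspondences are simplifying assumptions made here for brevity; Wu's actual optimization is more involved.

import numpy as np

def fit_shape_params(mean_shape, basis, edge_pts, reg=1e-2):
    # mean_shape: (K, 2) mean contour learned from prior dental scans;
    # basis: (K, 2, M) linear deformation modes; edge_pts: (K, 2) edge
    # samples detected in the photograph, in correspondence with the model.
    K, _, M = basis.shape
    A = basis.reshape(K * 2, M)
    b = (edge_pts - mean_shape).reshape(K * 2)
    # Ridge-regularized least squares keeps the fit near the statistical prior.
    return np.linalg.solve(A.T @ A + reg * np.eye(M), A.T @ b)

def reconstruct(mean_shape, basis, coeffs):
    # Patient-specific contour = mean shape plus weighted deformation modes.
    return mean_shape + np.einsum('kdm,m->kd', basis, coeffs)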
11. With reference to claim 3, Sachdeva teaches the first color 2D image is rendered from a captured image of the patient, video of the patient, stored image of the patient, or any combination thereof. (“The creation of the virtual patient model uses the capture and storage of at least two different digital sets of image data of the patient. … at least one of the data sets will include normally include data regarding the surface configuration of the face and head. A commercially available digital CCD camera 28 (FIG. 1), e.g., camera available from Sony or Canon, can be used to obtain this information. Preferably, the image data is color image data. The data sets are obtained by photographing the patient's head and face at various viewing angles-with the camera and storing the resulting image files in the memory of the computer.” [0071-0072] “the system will acquire digitized images from an X-ray machine capturing X-ray photographs of the patient's head, jaw, teeth, roots of teeth, and other craniofacial structures. … While the above discussion has described how 3D image of the face can be obtained from a three-dimensional scanner, there are other possibilities that may be used in the practice of alternative embodiments. One such alternative is creating a 3D virtual face from a series of 2-D color photographs. … Morphable models can be built based on various known approaches such as optic flow algorithms or active model matching strategy, or a combination of both. One approach is to scan a set of 2D faces.” [0074-0077] “After the images of the face, craniofacial structures, X-rays, teeth etc. are obtained and stored in memory in digital form they are superimposed on each other (i.e., registered to each other via software in the workstation) to create a complete virtual patient model on the workstation. The superposition of the sets of image data may be developed as an automatic software process, or one in which there is user involvement to aid in the process. In one possible example, the three-dimensional textured model of the face is properly aligned with the 3D jaw model obtained from the intra-oral scan, 3D skull data from CT scan, and 2 dimensional X-rays to create a virtual patient model.” [0083])
Sachdeva does not explicitly teach a color-coded image. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model.” [0327]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
12. With reference to claim 7, Sachdeva teaches adjusting the color information based on color information from one or more datastores. (“The computer or workstation 10 (FIG. 1) that includes the software for generating the patient model preferably includes interactive treatment planning software that allows the user to simulate various possible treatments for the patient on the workstation and visualize the results of proposed treatments on the user interface by seeing their effect on the visual appearance of the patient, especially their smile. The interactive treatment planning preferably provides suitable tools and icons that allow the user to vary parameters affecting the patient. Such parameters would include parameters that can be changed so as to simulate change in the age of the patient, and parameters that allow the user to adjust the color, texture, position and orientation of the teeth, individually and as a group. The user manipulates the tools for these parameters and thereby generates various virtual patient models with different features and smiles.” [0111] “The workstation includes a memory storing machine readable instructions comprising an integrated treatment planning and model manipulation software program indicated generally at 300. The treatment planning instructions 300 will be described in further detail below. The treatment planning software uses additional software modules. A patient history module 302 contains user interface screens and appropriate prompts to obtain and record a complete patient medical and dental history, along with pertinent demographic data for the patient.” [0117])
Sachdeva does not explicitly teach color channel information. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
13. With reference to claim 8, Sachdeva teaches the color information from one or more datastores comprises color information from one or more previous patients. (“The computer or workstation 10 (FIG. 1) that includes the software for generating the patient model preferably includes interactive treatment planning software that allows the user to simulate various possible treatments for the patient on the workstation and visualize the results of proposed treatments on the user interface by seeing their effect on the visual appearance of the patient, especially their smile. The interactive treatment planning preferably provides suitable tools and icons that allow the user to vary parameters affecting the patient. Such parameters would include parameters that can be changed so as to simulate change in the age of the patient, and parameters that allow the user to adjust the color, texture, position and orientation of the teeth, individually and as a group. The user manipulates the tools for these parameters and thereby generates various virtual patient models with different features and smiles.” [0111] “The workstation includes a memory storing machine readable instructions comprising an integrated treatment planning and model manipulation software program indicated generally at 300. The treatment planning instructions 300 will be described in further detail below. The treatment planning software uses additional software modules. A patient history module 302 contains user interface screens and appropriate prompts to obtain and record a complete patient medical and dental history, along with pertinent demographic data for the patient.” [0117])
Sachdeva does not explicitly teach color channel information. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
14. With reference to claim 9, Sachdeva teaches the color information further comprises brightness, coloration, dental treatment material, textures, or any combination thereof. (“If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique. In this technique, a map is constructed in texture space, that for each point (u, v), specifies a triangle whose cylindrical projection covers that point. The 3D point p corresponding to point (u, v) in texture space is computed by intersecting a ray with the surface of the corresponding point in the 2D colored image.” [0081] “the user interface preferably includes an icon 744, which when activated allows the user to change the simulated illumination of the teeth, i.e., become brighter, or come from a different place, in order to more clearly show the surface characteristics of the teeth.” [0186])
Sachdeva does not explicitly teach color channel information. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
15. With reference to claim 10, Sachdeva teaches generating textures on one or more areas of the color mapped parametric 3D model by processing the color information of the first color 2D image. (“Three-dimensional image data sets of the upper and lower arches including upper and lower teeth are preferably created with a 3D optical scanner 30, such as the OraMetrix hand-held in-vivo scanner. If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique. In this technique, a map is constructed in texture space, that for each point (u, v), specifies a triangle whose cylindrical projection covers that point. The 3D point p corresponding to point (u, v) in texture space is computed by intersecting a ray with the surface of the corresponding point in the 2D colored image.” [0081])
Sachdeva does not explicitly teach color channel information. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
16. With reference to claim 11, Sachdeva does not explicitly teach the textures on the parametric 3D model are generated on the second color-coded 2D image. This is what Pokotilov teaches. Pokotilov teaches the textures on the 3D model are generated on the second color-coded 2D image. (“Systems and methods are described herein to more closely integrate 3D bite models into 2D images of a patient” [0008] “Color parameters may include color temperature, red, green, and blue color intensities, saturation, luminance of the teeth and gingiva. Material parameters may include surface texture modifications such as surface textures and surface reflectivity.” [0321] “The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
The combination of Sachdeva and Pokotilov does not explicitly teach the parametric 3D model. This is what Wu teaches (“Embodiments can enable non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient), respectively. A teeth statistic model defining individual teeth in a teeth row can be developed for achieving this. Under this model-based approach, a personalized high-quality 3D teeth model can be reconstructed using just a sparse set of (e.g., uncalibrated) images or a short monocular video sequence as input. The teeth statistic model can be employed to reconstruct a 3D model of teeth based on images of teeth of an object or a person corresponding to the 3D model. In some embodiments, the teeth statistic model can jointly describe shape and pose variations per tooth, and as well as placement of the individual teeth in the teeth row. Unlike related approaches in the medical field, the model-based approach in accordance with the disclosure is non-invasive and can be used to reconstruct a geometric model of the entire teeth row including the gums from images captured from afar and potentially simultaneously. Under this approach, statistical information of a novel parametric tooth prior learned from high-quality 3D dental scans that models the global deformations of an entire teeth row as well as the individual variation of each single tooth can be used to fit teeth information acquired from the images.” [0005] “a single image or multiple images of an object's or a person's teeth can be received. Object-specific or person-specific teeth information can then be acquired from the image(s). Such teeth information may include data terms indicating color, edges, shape, maxilla and/or mandible bones and/or any other aspects of the teeth of the object or the person. Parameters of each tooth for the 3D model, corresponding to the object or the person, can be estimated from the statistic model using the teeth information. The teeth parameters can then be used to model the teeth of the 3D model using pre-authored 3D teeth templates.” [0007]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wu into the combination of Sachdeva and Pokotilov, in order to seamlessly integrate the teeth and gum model into existing photogrammetric multi-camera setups.
17. With reference to claim 12, Sachdeva teaches the textures are generated by projecting the first color 2D image onto the parametric 3D model. (“Three-dimensional image data sets of the upper and lower arches including upper and lower teeth are preferably created with a 3D optical scanner 30, such as the OraMetrix hand-held in-vivo scanner. If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique. In this technique, a map is constructed in texture space, that for each point (u, v), specifies a triangle whose cylindrical projection covers that point. The 3D point p corresponding to point (u, v) in texture space is computed by intersecting a ray with the surface of the corresponding point in the 2D colored image.” [0081])
Sachdeva does not explicitly teach a color-coded image. This is what Pokotilov teaches (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model.” [0327]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
18. With reference to claim 19, Sachdeva teaches the predetermined position comprises an initial position. (“The virtual patient model, or some portion thereof, such as data describing a three-dimensional model of the teeth in initial and target or treatment positions, is useful information for generating customized orthodontic appliances for treatment of the patient. The position of the teeth in the initial and desired positions can be used to generate a set of customized brackets, and customized flat planar archwire, and customized bracket placement jigs, as described in the above-referenced Andreiko et al. patents.” [0066] “the system employs software simulation of changes in shape or position of craniofacial structures (e.g., teeth or jaw) on the visual appearance, e.g., smile, of the patient. Accordingly, at least one of the data sets will include normally include data regarding the surface configuration of the face and head. … The image data regarding the patient's exterior appearance can be obtained through other means including via scanning of the head and face of the patient via the hand-held 3D-scanner 30 described in the published OraMetrix PCT application, … the scanner captures a sequence of overlapping images of the surface of the patient as the scanner is held by the hand and moved about the face. The set of images can be obtained in only a few minutes. Each image is converted to a set of X, Y and Z coordinate positions comprising a cloud of points representing the surface of the face. The point clouds from each image are registered to each other to find a best fit to the data. The resulting registered point cloud is then stored in the memory as a virtual three-dimensional object.” [0072-0073])
19. With reference to claim 20, Sachdeva teaches the predetermined position comprises a final position. (“The virtual patient model, or some portion thereof, such as data describing a three-dimensional model of the teeth in initial and target or treatment positions, is useful information for generating customized orthodontic appliances for treatment of the patient. The position of the teeth in the initial and desired positions can be used to generate a set of customized brackets, and customized flat planar archwire, and customized bracket placement jigs, as described in the above-referenced Andreiko et al. patents. Alternatively, the initial and final tooth positions can be used to derive data sets representing intermediate tooth positions, which are used to fabricate transparent aligning shells for moving teeth to the final position, as described in the above-referenced Chisti et al. patents.” [0066] “the system employs software simulation of changes in shape or position of craniofacial structures (e.g., teeth or jaw) on the visual appearance, e.g., smile, of the patient. Accordingly, at least one of the data sets will include normally include data regarding the surface configuration of the face and head. … The image data regarding the patient's exterior appearance can be obtained through other means including via scanning of the head and face of the patient via the hand-held 3D-scanner 30 described in the published OraMetrix PCT application, … the scanner captures a sequence of overlapping images of the surface of the patient as the scanner is held by the hand and moved about the face. The set of images can be obtained in only a few minutes. Each image is converted to a set of X, Y and Z coordinate positions comprising a cloud of points representing the surface of the face. The point clouds from each image are registered to each other to find a best fit to the data. The resulting registered point cloud is then stored in the memory as a virtual three-dimensional object.” [0072-0073])
20. With reference to claim 21, Sachdeva teaches the final position comprises a position after an orthodontic treatment plan, a restorative treatment plan, or both. (“The virtual patient model, or some portion thereof, such as data describing a three-dimensional model of the teeth in initial and target or treatment positions, is useful information for generating customized orthodontic appliances for treatment of the patient. The position of the teeth in the initial and desired positions can be used to generate a set of customized brackets, and customized flat planar archwire, and customized bracket placement jigs, as described in the above-referenced Andreiko et al. patents. Alternatively, the initial and final tooth positions can be used to derive data sets representing intermediate tooth positions, which are used to fabricate transparent aligning shells for moving teeth to the final position, as described in the above-referenced Chisti et al. patents.” [0066] “Once the user has modified the virtual patient model to achieve the patient's desired feature and smile, it is possible to automatically back-solve for the teeth, jaw and skull movement or correction necessary to achieve this result. In particular, the tooth movement necessary can be determined by isolating the teeth in the virtual patient model, treating this tooth finish position as the final position in the interactive treatment planning described in the published OraMetrix PCT application, WO 01/80761, designing the bracket placement and virtual arch wire necessary to move teeth to that position, and then fabricating the wire and bracket placement trays, templates or jigs to correctly place the brackets at the desired location. The desired jaw movement can be determined by comparing the jaw position in the virtual patient model's finish position with the jaw position in the virtual patient model in the original condition, and using various implant devices or surgical techniques to change the shape or position of the jaw to achieve the desired position.” [0113] “The evaluation can also serve as a guide by evaluating the course of treatment and the eventual outcome of treatment and providing a means to measure the difference between the actual outcome and the expected outcome. To realize this aspect, the practitioner would need to periodically obtain updated scans of the patient during the course of treatment and compare the current (or final) tooth position with the expected position and use measuring tools or other graphical devices (shading on tooth models) to quantify the amount of variance between the actual and expected position. Obtaining tooth position data during the course of treatment can be obtained by using the in-vivo scanner described in the published PCT application of OraMetrix, cited previously.” [0236])
21. Claims 4-6 and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sachdeva et al. (US 2004/0029068 A1) in view of Pokotilov et al. (US 2018/0263733 A1) and Wu et al. (US 2018/0085201 A1), as applied to claims 2 and 10 above, and further in view of Unklesbay et al. (US 2021/0045701 A1).
22. With reference to claim 4, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach defining a mask region of the parametric 3D model, wherein the mask region corresponds to one or more areas of the patient’s face, teeth, gingiva, lips, or any combination thereof. This is what Unklesbay teaches (“The rendered orthodontic appliance can be represented by a 2D image or a 3D model.” [0053] “FIGS. 8A-8C illustrate adaptive thresholding applied to luminance channel of YUV color space of region of interest: channel “Y” (luminance) of YUV color space (FIG. 8A), mask after adaptive thresholding applied combined (bitwise and) with mask from Lab color space adaptive threshold (FIG. 8B), and final color image of region of interest with combined mask applied (FIG. 8C).” [0055] “Edges are detected from the mask, using a technique such as Canny Edge Detection. FIG. 10 illustrates detected edges from canny edge detection. Contours can be generated from the detected edges, where individual teeth or groups of teeth can be analyzed or worked with as standalone objects. FIG. 11 illustrates closed contours found from detected edges, overlaid onto a color region of interest.” [0057-0058]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay into the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real-time and present back to the patient in sync.
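The masking-and-edge pipeline Unklesbay describes at [0055]-[0058] (adaptive thresholds on the YUV luminance and Lab lightness channels combined bitwise, then Canny edge detection and contour extraction) can be sketched with OpenCV as follows; the block size and threshold constants are illustrative choices, not values from the reference.

import cv2

def tooth_mask_and_contours(bgr_roi):
    # Adaptive thresholds on the YUV luminance ("Y") and Lab lightness ("L")
    # channels of the region of interest, combined with a bitwise AND.
    y = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YUV)[..., 0]
    l = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2Lab)[..., 0]
    m1 = cv2.adaptiveThreshold(y, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 31, -5)
    m2 = cv2.adaptiveThreshold(l, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 31, -5)
    mask = cv2.bitwise_and(m1, m2)
    edges = cv2.Canny(mask, 50, 150)          # Canny edge detection on the mask
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return mask, edges, contours              # contours approximate per-tooth objects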
23. With reference to claim 5, Sachdeva does not explicitly teach the color channel information is mapped onto at least part of the mask region of the parametric 3D model. This is what Pokotilov teaches. Pokotilov teaches color channel information. (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model. For example, the green color channel, which may be an 8-bit color channel indicates the brightness of the blurred image 4810 on a scale of 0 to 255 as, for example, overlaid on the 3D model. The red color channel may be used to differential each tooth and the gingiva from each other. In such an embodiment, the gingiva may have a red channel value of 1, the left upper central incisor may have a red value of 2, the right lower canine may have a red channel of 3, the portions of the model that are not teeth or gingiva might have a red channel value of 0, and so on, so that the red channel value of each pixel identifies the dental anatomy associated with the pixel. … The blue color channel may be used to identify the angle of the teeth and/or gingiva with respect to the facial plane.” [0327-0329]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov into Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
The combination of Sachdeva, Pokotilov and Wu does not explicitly teach that the color channel information is mapped onto at least part of the mask region of the parametric 3D model. This is what Unklesbay teaches (“The rendered orthodontic appliance can be represented by a 2D image or a 3D model.” [0053] “FIGS. 8A-8C illustrate adaptive thresholding applied to luminance channel of YUV color space of region of interest: channel “Y” (luminance) of YUV color space (FIG. 8A), mask after adaptive thresholding applied combined (bitwise and) with mask from Lab color space adaptive threshold (FIG. 8B), and final color image of region of interest with combined mask applied (FIG. 8C).” [0055] “Edges are detected from the mask, using a technique such as Canny Edge Detection. FIG. 10 illustrates detected edges from canny edge detection. Contours can be generated from the detected edges, where individual teeth or groups of teeth can be analyzed or worked with as standalone objects. FIG. 11 illustrates closed contours found from detected edges, overlaid onto a color region of interest.” [0057-0058] “Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay into the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real-time and present back to the patient in sync.
24. With reference to claim 6, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach the mask region is applied to only one of the patient’s face, teeth, gingiva, or lips. This is what Unklesbay teaches (“FIGS. 8A-8C illustrate adaptive thresholding applied to luminance channel of YUV color space of region of interest: channel “Y” (luminance) of YUV color space (FIG. 8A), mask after adaptive thresholding applied combined (bitwise and) with mask from Lab color space adaptive threshold (FIG. 8B), and final color image of region of interest with combined mask applied (FIG. 8C).” [0055] “Edges are detected from the mask, using a technique such as Canny Edge Detection. FIG. 10 illustrates detected edges from canny edge detection. Contours can be generated from the detected edges, where individual teeth or groups of teeth can be analyzed or worked with as standalone objects. FIG. 11 illustrates closed contours found from detected edges, overlaid onto a color region of interest.” [0057-0058]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay into the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real-time and present back to the patient in sync.
25. With reference to claim 13, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach the textures are estimated textures. This is what Unklesbay teaches (“The addition, subtraction, or modification of natural tooth anatomy, such as by using tooth whitening agents, reducing or building-up cusp tips or incisal edges, or predicting the appearance of teeth after eruption in children (possibly including ectopic eruptions).” [0027] “Turn the 3D point cloud into a 3D mesh for the simulation step. This may also include finding texture coordinates through ray tracing based on camera parameters (e.g., camera location).” [0050] “the virtual treatment can be applied, either through augmentation or manipulation of the 3D geometry. This may include pin-pointing locations or areas to apply the simulation. In this example, augmentation is implemented, where the rendered orthodontic appliance is overlaid onto the region of interest, after estimating the location on the region of interest where the appliance should be placed, or where detected appliances are virtually removed from the region of interest. … The rendered orthodontic appliance can be represented by a 2D image or a 3D model.” [0053] “the video camera may be used as a type of 3D scanner, thereby capturing multiple 2D images of a person's face from different vantage points (i.e., viewpoints and look-at vectors, which together with the image plane form a set of distinct view frustums). Using 3D photogrammetry techniques, a 3D model of the person's face (and head) may be generated, originally in the form of a point cloud, then later in the form of a triangular mesh. Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay into the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real-time and present back to the patient in sync.
26. With reference to claim 14, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach that the estimated textures are associated with one or more attributes of the surface of the teeth. This is what Unklesbay teaches (“The addition, subtraction, or modification of natural tooth anatomy, such as by using tooth whitening agents, reducing or building-up cusp tips or incisal edges, or predicting the appearance of teeth after eruption in children (possibly including ectopic eruptions).” [0027] “Turn the 3D point cloud into a 3D mesh for the simulation step. This may also include finding texture coordinates through ray tracing based on camera parameters (e.g., camera location). … This method involves a parametric representation of the face or region of interest (e.g., NURBS surface or Bézier surface), or a generic polygonal mesh (i.e., triangles or quadrilaterals), which will be morphed, expanded, or stretched to best fit either a point cloud of the face (see Method 1) or a set of landmarks obtained earlier.” [0050-0051] “the virtual treatment can be applied, either through augmentation or manipulation of the 3D geometry. This may include pin-pointing locations or areas to apply the simulation. In this example, augmentation is implemented, where the rendered orthodontic appliance is overlaid onto the region of interest, after estimating the location on the region of interest where the appliance should be placed, or where detected appliances are virtually removed from the region of interest. … The rendered orthodontic appliance can be represented by a 2D image or a 3D model. … various image processing techniques are used on the region of interest to segment and identify the teeth, which include scaling-up the region of interest, noise reduction, applying a segmentation algorithm such as Mean Shift segmentation, histogram equalization, adaptive thresholding on specific channels of multiple color spaces, eroding and dilating (opening/closing), edge detection, and finding contours.” [0053-0054] “the video camera may be used as a type of 3D scanner, thereby capturing multiple 2D images of a person's face from different vantage points (i.e., viewpoints and look-at vectors, which together with the image plane form a set of distinct view frustums). Using 3D photogrammetry techniques, a 3D model of the person's face (and head) may be generated, originally in the form of a point cloud, then later in the form of a triangular mesh. Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe.” [0062]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay with the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real time and present it back to the patient in sync.
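For illustration only, the segmentation chain Unklesbay lists in [0054] (scale-up, noise reduction, Mean Shift segmentation, histogram equalization, opening/closing) might be sketched as follows, assuming OpenCV; all parameter values are illustrative assumptions.

    import cv2
    import numpy as np

    def preprocess_roi(roi):
        # Scale up the region of interest, then reduce noise
        roi = cv2.resize(roi, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
        roi = cv2.fastNlMeansDenoisingColored(roi, None, 10, 10, 7, 21)
        # Mean Shift segmentation (spatial radius 15, color radius 30)
        roi = cv2.pyrMeanShiftFiltering(roi, 15, 30)
        # Histogram equalization on the luminance channel only
        yuv = cv2.cvtColor(roi, cv2.COLOR_BGR2YUV)
        yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)

    def clean_mask(mask):
        # Opening (erode then dilate) removes specks; closing fills small holes
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)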
27. With reference to claim 15, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach that the estimated textures are adjusted to simulate orthodontic or cosmetic treatment, or both. This is what Unklesbay teaches (“The addition, subtraction, or modification of natural tooth anatomy, such as by using tooth whitening agents, reducing or building-up cusp tips or incisal edges, or predicting the appearance of teeth after eruption in children (possibly including ectopic eruptions).” [0027] “Turn the 3D point cloud into a 3D mesh for the simulation step. This may also include finding texture coordinates through ray tracing based on camera parameters (e.g., camera location). … This method involves a parametric representation of the face or region of interest (e.g., NURBS surface or Bézier surface), or a generic polygonal mesh (i.e., triangles or quadrilaterals), which will be morphed, expanded, or stretched to best fit either a point cloud of the face (see Method 1) or a set of landmarks obtained earlier.” [0050-0051] “the virtual treatment can be applied, either through augmentation or manipulation of the 3D geometry. This may include pin-pointing locations or areas to apply the simulation. In this example, augmentation is implemented, where the rendered orthodontic appliance is overlaid onto the region of interest, after estimating the location on the region of interest where the appliance should be placed, or where detected appliances are virtually removed from the region of interest. … The rendered orthodontic appliance can be represented by a 2D image or a 3D model. … various image processing techniques are used on the region of interest to segment and identify the teeth, which include scaling-up the region of interest, noise reduction, applying a segmentation algorithm such as Mean Shift segmentation, histogram equalization, adaptive thresholding on specific channels of multiple color spaces, eroding and dilating (opening/closing), edge detection, and finding contours.” [0053-0054] “the video camera may be used as a type of 3D scanner, thereby capturing multiple 2D images of a person's face from different vantage points (i.e., viewpoints and look-at vectors, which together with the image plane form a set of distinct view frustums). Using 3D photogrammetry techniques, a 3D model of the person's face (and head) may be generated, originally in the form of a point cloud, then later in the form of a triangular mesh. Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe.” [0062]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay with the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real time and present it back to the patient in sync.
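For illustration only, a minimal sketch of adjusting an estimated tooth texture to preview a whitening treatment of the kind Unklesbay's [0027] names, assuming OpenCV; the lightness gain and chroma damping are illustrative assumptions, not a clinically validated shade model.

    import cv2
    import numpy as np

    def simulate_whitening(image, tooth_mask, lightness_gain=1.15):
        """Brighten and de-yellow the estimated tooth texture inside the mask
        to preview a cosmetic whitening treatment."""
        lab = cv2.cvtColor(image, cv2.COLOR_BGR2Lab).astype(np.float32)
        L, a, b = cv2.split(lab)
        sel = tooth_mask > 0
        L[sel] = np.clip(L[sel] * lightness_gain, 0, 255)  # raise lightness
        b[sel] = 128 + (b[sel] - 128) * 0.6                # pull b* toward neutral
        out = cv2.merge([L, a, b]).astype(np.uint8)
        return cv2.cvtColor(out, cv2.COLOR_Lab2BGR)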
28. With reference to claim 16, the combination of Sachdeva, Pokotilov and Wu does not explicitly teach that the one or more attributes of the surface of the teeth are feel, appearance, consistency, or any combination thereof. This is what Unklesbay teaches (“The addition, subtraction, or modification of natural tooth anatomy, such as by using tooth whitening agents, reducing or building-up cusp tips or incisal edges, or predicting the appearance of teeth after eruption in children (possibly including ectopic eruptions).” [0027] “Turn the 3D point cloud into a 3D mesh for the simulation step. This may also include finding texture coordinates through ray tracing based on camera parameters (e.g., camera location). … This method involves a parametric representation of the face or region of interest (e.g., NURBS surface or Bézier surface), or a generic polygonal mesh (i.e., triangles or quadrilaterals), which will be morphed, expanded, or stretched to best fit either a point cloud of the face (see Method 1) or a set of landmarks obtained earlier.” [0050-0051] “the virtual treatment can be applied, either through augmentation or manipulation of the 3D geometry. This may include pin-pointing locations or areas to apply the simulation. In this example, augmentation is implemented, where the rendered orthodontic appliance is overlaid onto the region of interest, after estimating the location on the region of interest where the appliance should be placed, or where detected appliances are virtually removed from the region of interest. … The rendered orthodontic appliance can be represented by a 2D image or a 3D model. … various image processing techniques are used on the region of interest to segment and identify the teeth, which include scaling-up the region of interest, noise reduction, applying a segmentation algorithm such as Mean Shift segmentation, histogram equalization, adaptive thresholding on specific channels of multiple color spaces, eroding and dilating (opening/closing), edge detection, and finding contours.” [0053-0054] “the video camera may be used as a type of 3D scanner, thereby capturing multiple 2D images of a person's face from different vantage points (i.e., viewpoints and look-at vectors, which together with the image plane form a set of distinct view frustums). Using 3D photogrammetry techniques, a 3D model of the person's face (and head) may be generated, originally in the form of a point cloud, then later in the form of a triangular mesh. Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe.” [0062]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay with the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real time and present it back to the patient in sync.
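For illustration only, one way such attributes could be associated with an estimated texture is a simple record type, assuming Python dataclasses; the type name, field names, and default values are illustrative assumptions, not structures disclosed by any cited reference.

    from dataclasses import dataclass, field

    @dataclass
    class ToothSurfaceTexture:
        """One estimated texture patch and the surface attributes tied to it."""
        tooth_id: str                                   # e.g., a tooth-numbering label
        uv_region: tuple                                # (u0, v0, u1, v1) in texture space
        appearance: dict = field(default_factory=dict)  # e.g., {"shade": "A2", "gloss": 0.7}
        feel: str = "smooth"                            # qualitative tactile descriptor
        consistency: str = "uniform"                    # qualitative surface descriptor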
29. With reference to claim 17, Sachdeva teaches that the textures are generated from the first color 2D image (“If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique. In this technique, a map is constructed in texture space, that for each point (u, v), specifies a triangle whose cylindrical projection covers that point. The 3D point p corresponding to point (u, v) in texture space is computed by intersecting a ray with the surface of the corresponding point in the 2D colored image.” [0081]).
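For illustration only, a minimal sketch of a cylindrical projection of the kind Sachdeva's [0081] describes, assuming numpy; note that Sachdeva builds the inverse map (texture point to triangle via ray intersection), whereas the sketch below shows the forward vertex-to-(u, v) direction, and the choice of y as the cylinder axis is an illustrative assumption.

    import numpy as np

    def cylindrical_uv(vertices):
        """Map Nx3 jaw-model vertices to normalized (u, v) texture coordinates."""
        x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
        u = (np.arctan2(z, x) + np.pi) / (2.0 * np.pi)           # angle about axis -> [0, 1]
        v = (y - y.min()) / max(float(y.max() - y.min()), 1e-9)  # height -> [0, 1]
        return np.stack([u, v], axis=1)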
Sachdeva does not explicitly teach a color-coded image, or using inpainting, blurring, image processing filters, image transformation, or any combination thereof. Pokotilov teaches a color-coded image (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model.” [0327]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov with Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
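For illustration only, a minimal sketch of an RGB color-coded model image in the spirit of Pokotilov's [0327], assuming numpy; the three example qualities packed into the channels are illustrative assumptions.

    import numpy as np

    def encode_coded_model(height_map, curvature_map, segment_ids):
        """Pack three per-pixel model qualities (each pre-scaled to 0-255)
        into the R, G, and B channels of a single coded image."""
        return np.stack([height_map, curvature_map, segment_ids],
                        axis=-1).astype(np.uint8)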
The combination of Sachdeva, Pokotilov and Wu does not explicitly teach using inpainting, blurring, image processing filters, image transformation, or any combination thereof. This is what Unklesbay teaches (“The rendered treatment is augmented onto the region of interest. Post processing can be done, such as Gaussian blur, to blend the augmented treatment with the original image and make the final image appear more natural, as if it were a part of the original image. … The approach described above uses 3D modeling to determine the position, orientation, and scale of the face, which may be mathematically described by a 3D Affine transform in a virtual universe. A corresponding transform is then applied to the dental appliances so that they register to the teeth of the patient in somewhat realistic positions and orientations, although 2D image analysis may ultimately be used to find the Facial Axis (FA) points of the teeth at which to place brackets.” [0060-0061] “In the case of modified anatomies, the same method could be used, except the pixels of these anatomies may need to be colored using previously captured values that were UV mapped onto the mesh of the plan model. Alternatively, the color values could be obtained from the current video frame but then transformed to different positions as determined by morphs to the 3D anatomy according to the treatment plan. For example, an orthognathic surgery might prescribe that the mandible be advanced by several millimeters. The color values of the pixels used to render the patient's soft tissues (skin and lips) affected by the advancement would tend not to differ as a result; only their positions in the virtual world and thus the rendering would change. As such, the affected pixels in the 2D video image might simply be translated in the image plane according to a 3D Affine transform projected onto the view plane.” [0067]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay with the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real time and present it back to the patient in sync.
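For illustration only, a minimal sketch of the Gaussian-blur blending Unklesbay mentions in [0060], assuming OpenCV: the rendered appliance overlay is composited over the original frame through a blurred (feathered) alpha mask so the augmentation looks like part of the image; the kernel size is an illustrative assumption.

    import cv2
    import numpy as np

    def blend_overlay(frame, overlay, overlay_mask):
        """Composite a rendered appliance over a video frame using a
        Gaussian-feathered alpha mask so the seam looks natural."""
        alpha = cv2.GaussianBlur(overlay_mask, (15, 15), 0)
        alpha = alpha.astype(np.float32)[:, :, None] / 255.0   # HxWx1 in [0, 1]
        blended = alpha * overlay.astype(np.float32) \
            + (1.0 - alpha) * frame.astype(np.float32)
        return blended.astype(np.uint8)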
30. With reference to claim 18, Sachdeva teaches that the textures are generated from the first color 2D image (“If the 3D jaw model has no texture model, i.e., no color data, the texture data can be extracted from the 2 dimensional colored picture of the upper and lower jaw and mapped to the 3D coordinates on the jaw model using a cylindrical projection technique. In this technique, a map is constructed in texture space, that for each point (u, v), specifies a triangle whose cylindrical projection covers that point. The 3D point p corresponding to point (u, v) in texture space is computed by intersecting a ray with the surface of the corresponding point in the 2D colored image.” [0081]).
Sachdeva does not explicitly teach a color-coded image, or that the textures are generated from each pixel separately. Pokotilov teaches a color-coded image (“The coded model of the patient's teeth 4812 may be a red-green-blue (RGB) color coded image of a model of the patients teeth, with each color channel corresponding to a different quality or feature of the model.” [0327]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pokotilov with Sachdeva, in order to increase the effectiveness and acceptance of orthodontic treatment.
The combination of Sachdeva, Pokotilov and Wu does not explicitly teach that the textures are generated from each pixel separately. This is what Unklesbay teaches (“Accompanying these techniques is UV mapping (or texture mapping) to apply color values from pixels in the 2D images to 3D vertices or triangles in the mesh. As such, a realistic-looking and reasonably accurate 3D model of the person's face may be generated in the virtual universe. Subsequently, an image of the 3D model may be rendered onto a 2D plane that is positioned anywhere in the virtual universe according to a view frustum.” [0062] “the color value of each pixel in the video image presented back to the patient can simply pass-through from the value captured by the video camera. In the case of appliances being attached to the patient's teeth (or other parts of the face), the appliances would be rendered in the virtual world, and their 2D renditions would overlay corresponding areas from the video frames, in effect masking the underlying anatomy. In the case of modified anatomies, the same method could be used, except the pixels of these anatomies may need to be colored using previously captured values that were UV mapped onto the mesh of the plan model. Alternatively, the color values could be obtained from the current video frame but then transformed to different positions as determined by morphs to the 3D anatomy according to the treatment plan. For example, an orthognathic surgery might prescribe that the mandible be advanced by several millimeters. The color values of the pixels used to render the patient's soft tissues (skin and lips) affected by the advancement would tend not to differ as a result; only their positions in the virtual world and thus the rendering would change. As such, the affected pixels in the 2D video image might simply be translated in the image plane according to a 3D Affine transform projected onto the view plane.” [0067]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Unklesbay with the combination of Sachdeva, Pokotilov and Wu, in order to render the virtual scene in real time and present it back to the patient in sync.
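For illustration only, a minimal sketch of the per-pixel translation Unklesbay's [0067] describes, assuming OpenCV: only the masked pixels (e.g., soft tissue affected by a mandibular advancement) are shifted by a 2D offset, here taken as the 3D advancement already projected onto the view plane; the mask and offset values are illustrative assumptions.

    import cv2
    import numpy as np

    def translate_region(frame, region_mask, dx_px, dy_px):
        """Shift only the masked pixels by a 2D offset and composite them
        back over the unmodified frame."""
        h, w = frame.shape[:2]
        M = np.float32([[1, 0, dx_px], [0, 1, dy_px]])   # 2D affine translation
        moved = cv2.warpAffine(frame, M, (w, h))
        moved_mask = cv2.warpAffine(region_mask, M, (w, h))
        out = frame.copy()
        out[moved_mask > 0] = moved[moved_mask > 0]      # pass-through elsewhere
        return out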
Conclusion
31. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin whose telephone number is (571)270-3697. The examiner can normally be reached on Monday-Friday 8:00 AM-4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE CHIN/
Primary Examiner, Art Unit 2614