DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Applicant's amendments filed on 26 February 2026 have been entered. No claims have been amended. No claims have been canceled. No claims have been added. Claims 1-19 are still pending in this application, with claims 1 and 14-16 being independent.
Response to Arguments
Applicant's arguments filed 26 February 2026 have been fully considered but they are not persuasive. With respect to the independent claims, Applicant argues that “Wu fails to disclose any of (i) determining a set of candidate texture values for a respective texture value [determined for a respective point], each candidate texture value corresponding to a respective frame in the subset of frames, (ii) computing, for each respective candidate texture value in the set of candidate texture values, a quality factor, or (iii) computing the respective texture value for the respective point by combining, based on their respective quality factors, candidate texture values selected from the set of candidate texture values, as required by independent claims 1 and 14-15. … Wu therefore describes that each respective polygon in a polygonal 3D mesh is matched to a particular view (which is selected from multiple different views), and the texture assigned to the respective polygon is determined from a color image corresponding to the particular matched view. However, Wu does not disclose combining candidate texture values/color information from multiple different images in order to determine a texture value for each point in a 3D mesh or 3D point cloud, much less the combination of features (i), (ii), and (iii) of independent claims 1 and 14-15.”
Specifically, Applicant argues that “Wu merely describes matching each respective face in the mesh to a single, particular view that provides the best observation of the respective face (see Wu, paragraphs [0059] and [0101]), and even describes pruning a key view frame to "ensure no redundant view exists" (see Wu, paragraph [0091]). Therefore, rather than determining a set of candidate texture values for a respective texture value for a respective point, Wu describes determining a only single view for each face in a 3D mesh of polygonal faces”.
Examiner respectfully disagrees, noting that the passage cited for the preceding limitation discloses “texture coordinates of each vertex Vt that project onto the view and the bounding box of the projection are also contained in the texture fragment” (Paragraph [0111]), and that cited Paragraph [0059] discloses, for example, “term “key” relates to the use of a particular image view as a type of “color key”, a resource used for color mapping, as the term is used by those skilled in the color imaging arts. A key view is a stored image taken at a particular aspect and used for texture mapping, as described in more detail subsequently.” Examiner notes that this clearly reads on the claimed “determining a set of candidate texture values,” as texture values are indeed determined for each point (or, in this case, each vertex).
Next, Applicant argues that “Wu also fails to disclose "computing, for each respective candidate texture value in the set of candidate texture values, a quality factor," as recited by independent claims 1 and 14-15. Wu cannot disclose feature (ii) already for the reason that feature (ii) requires the disclosure of feature (i) which is not disclosed (see above). Furthermore, contrary to the assertion in the Office Action (see Detailed Action, page 6), Wu's disclosure of generating a plurality of 2D color texture shading images, assigning each 3D mesh polygonal surface in the 3D mesh to one of the 2D color texture shading images, and rendering the 3D mesh polygonal surfaces according to determined coordinates in an assigned 2D color texture shading image (see paragraph [0148] of Wu) cannot disclose feature (ii) at least because Wu's method does not involve computing quality factors for each candidate texture value in a set of candidate texture values. Rather, Wu merely renders the 3D mesh with a color determined from a single assigned 2D color image. Nor can Wu's disclosure of generating a global texture map, as described in paragraphs [0060]-[0063], disclose feature (ii) at least because Wu's method of generating a global texture map does not involve computing quality factors for each candidate texture value in a set of candidate texture values.”
First, Examiner respectfully disagrees with the assertion that feature (i) is not disclosed, as discussed above. Examiner points to the cited Paragraphs [0060]-[0063], which disclose: “projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping…From this mapping and blending process, regions in color shading images corresponding to the projected texture fragments for each of the views can be extracted and packed into a single texture image, termed a “global texture map.” Examiner notes that said corresponding color data from each key view clearly corresponds to the claimed quality factors. Examiner notes that further clarification of said quality factors (both in the instant application and in the reference) can be seen in dependent claim 11 and the cited Paragraphs [0094]-[0101], which disclose: “Exemplary criteria and parameters for determining visibility can include the following:…d) Normal score: The normal of Fci must be closer to the viewing direction of VS_i than a pre-defined threshold, Th_n1; … e) FOV: Fci must be in the FOV of VS_i;… f) Focus score: Fci must be closer to the focus plane of VS_i than a pre-defined threshold, Th_f2. … It can be appreciated that alternate visibility criteria can be used. … 2) Combine the normal score and focus score for all VS_i as the assigning score; … 3) Select one view with the best assigning score (e.g., the optimal view) as the assigned view of Fci.”
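For illustration of how the quoted scoring sequence operates, the Paragraph [0094]-[0101] criteria can be sketched as follows. This is a minimal, hypothetical sketch: Wu recites the criteria and the combine-and-select steps, but not a specific score formula, so the arithmetic, data layout, and thresholds below are illustrative assumptions only.

```python
import numpy as np

def assigning_score(face_normal, view, th_n1=0.5, th_f2=5.0):
    """Combine Wu's normal score and focus score for one candidate view VS_i;
    returns None if the view fails the Th_n1 / Th_f2 visibility thresholds."""
    # d) Normal score: the face normal must be close to the viewing direction.
    normal_score = float(np.dot(face_normal, -view["direction"]))
    if normal_score < th_n1:
        return None
    # f) Focus score: the face must lie close to the view's focus plane.
    focus_dist = abs(view["face_depth"] - view["focus_depth"])
    if focus_dist > th_f2:
        return None
    # 2) Combine the normal score and focus score as the assigning score.
    return normal_score + (1.0 - focus_dist / th_f2)

def assign_view(face_normal, views):
    """3) Select the view with the best assigning score as the assigned view."""
    scored = [(assigning_score(face_normal, v), v) for v in views]
    scored = [(s, v) for s, v in scored if s is not None]
    return max(scored, key=lambda sv: sv[0])[1] if scored else None
```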
Finally, Applicant argues, regarding feature (iii), that “Wu cannot disclose feature (iii) already for the reason that feature (iii) requires the disclosure of features (i) and (ii) which are both not disclosed.” Again, Examiner respectfully disagrees for at least the above reasons. Further, Applicant argues that “contrary to the assertion in the Office Action (see Detailed Action, pages 6-7), Wu's disclosure of a "color blending method" performed "on the projected boundary" between texture fragments cannot disclose combining candidate texture values based on their respective quality factors as required by independent claims 1 and 14-15. While the "color blending" of Wu may result in blending multiple colors along projected boundaries between different views, any such blending described by Wu does not take into account quality factors that are computed for respective candidate texture value.”
Examiner points to cited Paragraph [0148], which discloses, for example, “grouping 3-D mesh polygonal surfaces assigned to the same 2-D color texture shading image into a 3-D mesh fragment surface; determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth.” Examiner notes that this clearly teaches such combining of texture values, as currently claimed.
For the remaining claims, Applicant argues allowability for at least the above reasons or by virtue of their dependence from one of the aforementioned independent claims. It follows that all remaining rejections are maintained for at least the above reasons.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-19 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Wu et al. (US Pub. 2018/0025529), hereinafter Wu.
Regarding claim 1, Wu discloses a method for generating a texture for a three-dimensional (3D) model of an oral structure, the method comprising: providing the 3D model of the oral structure, the 3D model of the oral structure being provided in the form of a polygon mesh that includes a number of connected polygons registered in a 3D coordinate system (Paragraphs [0055]-[0059]: have been a number of 3-D reconstruction apparatus and methods to capture 3-D models of teeth, some of which actually collect the 3-D geometric information from the tooth surface. There have also been a number of apparatus disclosed for capturing photographs of the teeth surface, e.g., color images, which actually reflect the spectrum properties of teeth surfaces for given illumination sources. Method and/or apparatus embodiments of the application described herein can help to improve the user experience and/or provide enhancement of surface details and/or color texture in combining the 3-D geometric information and color image content…some of the conventional 3-D dental scanners use a color mapping scheme that assigns a color value to each vertex in the 3-D tooth model. This type of vertex/color assignment can be a poor compromise, however, and often provides an approximation of color that is disappointing, making it difficult to observe more complex surface detail information and color texture. An example of the results of existing 3-D dental scanners is shown in FIG. 7A. In contrast to FIG. 7A, an example of texture mapping results on teeth according to exemplary method and/or apparatus embodiments of the application is shown in FIG. 7B. As suggested by FIGS. 7A and 7B, and particularly noticeable when presented in color, exemplary texture mapping methods and/or apparatus of the application provide a more accurate and/or more realistic representation of the surface appearance for tooth structures than do vertex mapping technique… matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes); identifying a set of points located on the polygon mesh, each respective point in the set of points being defined by a coordinate value in the 3D coordinate system (Paragraphs [0060]-[0061]: processing can be performed fragment by fragment, one at a time. In processing each fragment, the vertices that define the fragment are projected onto its view (e.g., its key view) using a standard projection routine, employing techniques well known for mapping 3-D points to a 2-D plane. This projection can also use the camera's intrinsic parameters extracted as part of camera calibration… projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. 
Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping; Paragraph [0148]: determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image); determining, for each respective point in the set of points, a respective texture value (Paragraph [0013]: grouping 3-D mesh polygonal surfaces assigned to the same view into a texture fragment; determining image coordinates for vertices of the 3-D mesh polygonal surfaces in each texture fragment from projection of the vertices onto the view associated with the texture fragment; and rendering the 3-D mesh with texture values in the 2-D color shading images corresponding to each texture fragment according to the determined image coordinates to generate a color texture 3-D surface contour image of the one or more teeth; Paragraph [0111]: each texture fragment contains the faces Fci and vertices Vt assigned to the same view in the key view frame and sharing the same region in the color shading image. The texture coordinates of each vertex Vt that project onto the view and the bounding box of the projection are also contained in the texture fragment), wherein each respective texture value is determined by: identifying a set of frames, filtering the set of frames to identify a subset of frames, determining a set of candidate texture values for the respective texture value (Fig. 8D; Paragraph [0059]: After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. 
The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour. As part of this processing, the visibility of each face in the mesh is determined and matched to the particular view that provides best observation on the faces in all views. The full set of views matched by all faces in the mesh serves as the key view frame. The term “key” relates to the use of a particular image view as a type of “color key”, a resource used for color mapping, as the term is used by those skilled in the color imaging arts. A key view is a stored image taken at a particular aspect and used for texture mapping, as described in more detail subsequently; Paragraph [0081]: At this point in processing, the key view frame can be identified, using the following exemplary sequence, shown as form visible view step S310 and key frame setup step S320 in the logic flow diagram of FIG. 8D), each candidate texture value corresponding to a respective frame in the subset of frames, computing, for each respective candidate texture value in the set of candidate texture values, a quality factor, and computing the respective texture value for the respective point by combining, based on their respective quality factors, candidate texture values selected from the set of candidate texture values (Paragraph [0148]: assigning each 3-D mesh polygonal surface in the 3-D mesh representing the 3-D surface contour image of the one or more teeth to one of a subset of the 2-D color texture shading images; grouping 3-D mesh polygonal surfaces assigned to the same 2-D color texture shading image into a 3-D mesh fragment surface; determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image; Paragraphs [0060]-[0063]: projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. 
Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping…From this mapping and blending process, regions in color shading images corresponding to the projected texture fragments for each of the views can be extracted and packed into a single texture image, termed a “global texture map”. In one exemplary embodiment, a packing strategy can be used to make the packed texture image more compact and/or more efficient. The texture coordinates of all vertices can also be adjusted so that they align to the origin of the global texture map…all vertices with 3-D coordinates and 2-D texture coordinates and the global texture map can be output to a 3-D rendering engine for display, using techniques familiar to those skilled in volume image representation. Results can also be stored in memory or transmitted between processors); and creating a texture atlas, the texture atlas being provided in the form of a two-dimensional (2D) texture image, the 2D texture image including a number of texels, and a mapping between each respective texel in the 2D texture image and a corresponding point located on the polygon mesh in the 3D coordinate system, wherein each respective texel in the 2D texture image has a value equal to the respective texture value determined for the respective point in the set of points that corresponds to the respective texel (Paragraphs [0060]-[0063]: projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping…From this mapping and blending process, regions in color shading images corresponding to the projected texture fragments for each of the views can be extracted and packed into a single texture image, termed a “global texture map”. In one exemplary embodiment, a packing strategy can be used to make the packed texture image more compact and/or more efficient. The texture coordinates of all vertices can also be adjusted so that they align to the origin of the global texture map…all vertices with 3-D coordinates and 2-D texture coordinates and the global texture map can be output to a 3-D rendering engine for display, using techniques familiar to those skilled in volume image representation. Results can also be stored in memory or transmitted between processors; Paragraph [0148]: determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. 
triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image).
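To make the examiner's reading of features (i)-(iii) concrete, the per-point computation recited in claim 1 can be sketched as below. This is a hedged illustration only: the helper functions, frame structure, and weighted blend are hypothetical placeholders (fuller sketches of the helpers appear under claims 9, 11, and 12 below), not Wu's implementation and not Applicant's claimed code.

```python
import numpy as np

# Placeholder helpers; fuller sketches appear under claims 9, 11, and 12 below.
def passes_filters(point, normal, frame):    # claim-11 style visibility tests
    return True
def sample_color(point, frame):              # claim-9 style projection lookup
    return frame["rgb_at_point"]
def quality_factor(point, normal, frame):    # claim-12 style weighting
    return frame.get("quality", 1.0)

def texture_value(point, normal, frames):
    """Per-point texture value per claim 1: filter the frame set, gather one
    candidate value per surviving frame, compute a quality factor for each
    candidate, and blend the candidates weighted by those factors."""
    subset = [f for f in frames if passes_filters(point, normal, f)]
    candidates = [sample_color(point, f) for f in subset]           # feature (i)
    qualities = [quality_factor(point, normal, f) for f in subset]  # feature (ii)
    c = np.asarray(candidates, dtype=float)   # N x 3 rows of RGB values
    q = np.asarray(qualities, dtype=float)    # N quality-factor weights
    return (q[:, None] * c).sum(axis=0) / q.sum()                   # feature (iii)
```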
Regarding claim 2, Wu discloses the method according to claim 1, wherein the set of points located on the polygon mesh includes, for each respective polygon in the polygon mesh, at least one point (Fig. 8D; Paragraph [0059]: After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour. As part of this processing, the visibility of each face in the mesh is determined and matched to the particular view that provides best observation on the faces in all views. The full set of views matched by all faces in the mesh serves as the key view frame. The term “key” relates to the use of a particular image view as a type of “color key”, a resource used for color mapping, as the term is used by those skilled in the color imaging arts. A key view is a stored image taken at a particular aspect and used for texture mapping, as described in more detail subsequently; Claim 19: wherein rendering the 3-D surface contour image further comprises, for each polygonal face of the 3-D mesh, mapping a portion of the 2-D color shading image to the corresponding polygonal face that is defined by mapping feature points that define the polygonal face in the 3-D mesh to corresponding pixels in the associated 2-D color shading image).
Regarding claim 3, Wu discloses the method according to claim 1, wherein the set of points located on the polygon mesh includes, for each respective polygon in the polygon mesh, at least one vertex point, at least one edge point, and at least one interior point (Fig. 8C; Paragraph [0059]: mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour; Paragraphs [0076]-[0077]: an image acquisition step S210, structured light images for contour imaging are obtained. A point cloud generation step S220 then generates a 3-D point cloud from structured light images. Mesh information is combined in order to generate final output mesh Mo in a mesh generation step S230. FIG. 8C shows a mesh M with individual planar faces Fc. Each face Fc is triangular in the example shown. FIG. 8C also shows the pose relative to mesh M…Each triangular (planar) face Fc in the mesh is defined using three vertices Vt. Mesh Mo has a total of J planar faces Fc and I vertices Vt:).
Regarding claim 4, Wu discloses the method according to claim 1, wherein each polygon in the polygon mesh is a triangle, and wherein the set of points located on the polygon mesh includes, for each respective triangle in the polygon mesh, three vertex points, at least three edge points, and at least one interior point (Fig. 8C; Paragraph [0059]: mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour; Paragraphs [0076]-[0077]: an image acquisition step S210, structured light images for contour imaging are obtained. A point cloud generation step S220 then generates a 3-D point cloud from structured light images. Mesh information is combined in order to generate final output mesh Mo in a mesh generation step S230. FIG. 8C shows a mesh M with individual planar faces Fc. Each face Fc is triangular in the example shown. FIG. 8C also shows the pose relative to mesh M…Each triangular (planar) face Fc in the mesh is defined using three vertices Vt. Mesh Mo has a total of J planar faces Fc and I vertices Vt:).
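As a brief illustration of the point set recited in claims 2-4, the following hypothetical sketch samples one triangular face at its three vertices, its three edge midpoints, and its centroid; the choice of midpoints and centroid is an assumption, since the claims require only "at least" those point counts.

```python
import numpy as np

def sample_points(v0, v1, v2):
    """Claim-4 style point set for one triangular face: three vertex points,
    three edge points (midpoints here), and one interior point (the centroid)."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    vertex_points = [v0, v1, v2]
    edge_points = [(v0 + v1) / 2, (v1 + v2) / 2, (v2 + v0) / 2]
    interior_points = [(v0 + v1 + v2) / 3]
    return vertex_points + edge_points + interior_points
```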
Regarding claim 5, Wu discloses the method according to claim 1, wherein each frame in the set of frames includes a depth image and a composite color image, and wherein the 3D mesh is a 3D mesh constructed using depth data from the respective depth images (Paragraph [0003]: number of techniques have been developed for obtaining surface contour information from various types of objects in medical, industrial, and other applications. Optical 3-dimensional (3-D) measurement methods provide shape and depth information using light directed onto a surface in various ways. Among types of imaging methods used for contour imaging are those that generate a series of light patterns and use focus or triangulation to detect changes in surface shape over the illuminated area).
Regarding claim 6, Wu discloses the method according to claim 1, wherein each respective frame in the subset of frames includes a composite color image, the composite color image including a plurality of color channels (Fig. 3; Paragraphs [0045]-[0049]: Illumination array 10 projects light of different color component wavelengths, typically Red (R), Green (G), and Blue (B), one at a time, and captures a separate image on monochrome sensor array 30 at each wavelength band. However, other color component combinations can be used. The captured images are also processed and stored by control logic processor 80… shown in FIG. 3 is a red light source 32r, a green light source 32g, and a blue light source 32b for providing color light for capturing three grayscale images, also called monochromatic shading images needed for construction of a full color image. Each of these light sources can include a single light emitting element, such as a light-emitting diode (LED) or of multiple light emitting elements. In the embodiment shown, the illumination path for structured pattern light from the fringe generator and the RGB light is the same; the detection path of light toward sensor array 30 is also the same for both structured pattern and RGB image content).
Regarding claim 7, Wu discloses the method according to claim 6, wherein determining each respective candidate texture value that corresponds to a respective frame in the subset of frames comprises: determining, for each respective color channel of the plurality of color channels of the composite color image of the respective frame, a color channel contribution, and combining each respective color channel contribution to provide the respective candidate texture value (Fig. 3; Paragraphs [0045]-[0049]: Illumination array 10 projects light of different color component wavelengths, typically Red (R), Green (G), and Blue (B), one at a time, and captures a separate image on monochrome sensor array 30 at each wavelength band. However, other color component combinations can be used. The captured images are also processed and stored by control logic processor 80… shown in FIG. 3 is a red light source 32r, a green light source 32g, and a blue light source 32b for providing color light for capturing three grayscale images, also called monochromatic shading images needed for construction of a full color image. Each of these light sources can include a single light emitting element, such as a light-emitting diode (LED) or of multiple light emitting elements. In the embodiment shown, the illumination path for structured pattern light from the fringe generator and the RGB light is the same; the detection path of light toward sensor array 30 is also the same for both structured pattern and RGB image content; Paragraph [0061]: projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping).
Regarding claim 8, Wu discloses the method according to claim 7, wherein the composite color image of each frame in the subset of frames is a combination of monochrome images, each monochrome image corresponding to a respective color channel of the plurality of color channels (Paragraph [0058]: LED or other light sources having specified wavelength or color spectrum bands are used to illuminate the teeth through an optical path in an ordered sequence. In addition, a set of monochromatic component shading images are captured by a monochrome sensor in sequence. 2-D feature points are extracted from the monochrome images. Transformations between the shading images are calculated, by which the monochromatic component shading images are registered to each other, such as using the extracted feature points. In one embodiment, using a pre-specified color linear calibration matrix, the color value for each pixel is recovered from the combined, registered pixel values taken from the shading images. Thus, for each view, a color shading image is also generated).
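The per-channel structure recited in claims 6-8 (a composite color image assembled from registered monochrome images, one per color channel) can be sketched as follows; the array layout is a hypothetical choice, while the linear color-calibration step tracks Wu's Paragraphs [0054] and [0058].

```python
import numpy as np

def compose_color_image(i_r, i_g, i_b, calib=np.eye(3)):
    """Stack three registered monochrome shading images (one per color channel)
    into an H x W x 3 composite, then apply a linear color-calibration matrix
    to every pixel (cf. Wu, Paragraphs [0054] and [0058])."""
    raw = np.stack([i_r, i_g, i_b], axis=-1).astype(float)  # H x W x 3
    return raw @ calib.T                                    # per-pixel linear calibration
```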
Regarding claim 9, Wu discloses the method according to claim 8, wherein determining the color channel contribution for each respective color channel of the composite color image comprises: determining, based on a camera position in the 3D coordinate system that corresponds to the monochrome image corresponding to the respective color channel and the coordinate value in the 3D coordinate system of the respective point for which the respective texture value is computed, a pixel in the monochrome image and providing a pixel value of the determined pixel as the color channel contribution for the respective color channel (Paragraphs [0066]-[0068]: In an image acquisition step S110, acquire each of the component monochrome images, Ir, Ig, Ib, capturing them using different (e.g., Red, Green, and Blue) color illumination, respectively…2) Extract feature points from the captured component monochrome images in a feature points extraction step S120. Feature points extraction step S120 may use, for example, Harris & Stephens corner detection or other feature detection technique known to those skilled in the image feature detection arts. Feature points can be extracted from each of the component monochrome images Ir, Ig, Ib, to generate three corresponding sets of feature points; Paragraph [0148]: a method for forming a color texture mapping to a 3-D contour image of one or more teeth in a intra-oral camera with a monochrome sensor array, can include obtaining a 3-D mesh representing a 3-D surface contour image of the one or more teeth according to recorded image data; generating a plurality of sets of at least three monochromatic shading images by projecting light of at least three different spectral bands onto the one or more teeth and recording at least three corresponding color component image data on the monochrome sensor array; combining selected sets of the at least three monochromatic shading images to generate a plurality of corresponding 2-D color texture shading images, where each of the plurality of color texture shading images has a view to the one or more teeth; assigning each 3-D mesh polygonal surface in the 3-D mesh representing the 3-D surface contour image of the one or more teeth to one of a subset of the 2-D color texture shading images; grouping 3-D mesh polygonal surfaces assigned to the same 2-D color texture shading image into a 3-D mesh fragment surface; determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. 
In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image).
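Claim 9's per-channel lookup (projecting the 3D point into the monochrome image associated with a color channel, using that image's camera position, and taking the resulting pixel value as the channel contribution) can be illustrated by the sketch below; the pinhole camera model and nearest-pixel lookup are simplifying assumptions, not limitations drawn from Wu or the claim.

```python
import numpy as np

def channel_contribution(point_3d, R, t, K, mono_image):
    """Claim-9 style contribution: project the 3-D point into one channel's
    monochrome image and return the pixel value found there."""
    pc = R @ np.asarray(point_3d, dtype=float) + t   # world -> camera coordinates
    uvw = K @ pc                                     # 3 x 3 pinhole intrinsics
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]          # perspective divide
    return mono_image[int(round(v)), int(round(u))]  # nearest-pixel lookup
```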
Regarding claim 10, Wu discloses the method according to claim 9, wherein each respective monochrome image of each composite image is independently associated with a respective camera position in the 3D coordinate system (Fig. 8A; Paragraph [0066]: FIG. 8A is a logic flow diagram that shows exemplary processing for forming composite color shading images [Is1, Is2, . . . Isk}. This processing can be executed for each of the K views of the tooth obtained from reflectance imaging that obtains both contour images and color image content. The process of composite color shading image set generation step S100 can begin with a set of views V…each view at a different view pose, wherein pose for a particular view relates to its viewing aspect; the phrase “view pose” or “view aspect” relates to orientation alignment and positional characteristics such as the relative amounts of roll, yaw, and pitch of the subject relative to a coordinate system having its origin at a focal point of the camera, and includes characteristics such as view distance and camera angle; Paragraph [0003]: Fringe projection imaging uses patterned or structured light and triangulation to obtain surface contour information for structures of various types. In fringe projection imaging, a pattern of lines is projected toward the surface of an object from a given angle. The projected pattern from the surface is then viewed from another angle as a contour image, taking advantage of triangulation in order to analyze surface information based on the appearance of contour lines. Phase shifting, in which the projected pattern is incrementally shifted spatially for obtaining additional measurements at the new locations, is typically applied as part of fringe projection imaging, used in order to complete the contour mapping of the surface and to increase overall resolution in the contour image).
Regarding claim 11, Wu discloses the method according to claim 1, wherein filtering the set of frames to identify the subset of frames includes performing, for each respective frame in the set of frames, at least one of: a camera perpendicularity test that analyzes a degree of perpendicularity between a camera sensor plane corresponding to the respective frame and a normal of the respective point located on the polygon mesh, a camera distance test that analyzes a distance, in the 3D coordinate system, between a camera capture position corresponding to the respective frame and the respective point located on the polygon mesh (Fig. 8A; Paragraph [0066]: FIG. 8A is a logic flow diagram that shows exemplary processing for forming composite color shading images [Is1, Is2, . . . Isk}. This processing can be executed for each of the K views of the tooth obtained from reflectance imaging that obtains both contour images and color image content. The process of composite color shading image set generation step S100 can begin with a set of views V…each view at a different view pose, wherein pose for a particular view relates to its viewing aspect; the phrase “view pose” or “view aspect” relates to orientation alignment and positional characteristics such as the relative amounts of roll, yaw, and pitch of the subject relative to a coordinate system having its origin at a focal point of the camera, and includes characteristics such as view distance and camera angle; Paragraphs [0094]-[0101]: Exemplary criteria and parameters for determining visibility can include the following:…d) Normal score: The normal of Fci must be closer to the viewing direction of VS_i than a pre-defined threshold, Th_n1; … e) FOV: Fci must be in the FOV of VS_i;… f) Focus score: Fci must be closer to the focus plane of VS_i than a pre-defined threshold, Th_f2. … It can be appreciated that alternate visibility criteria can be used. … 2) Combine the normal score and focus score for all VS_i as the assigning score; … 3) Select one view with the best assigning score (e.g., the optimal view) as the assigned view of Fci), a view frustum test that determines whether the respective point located on the polygon mesh is located in a view frustum corresponding to the respective frame, or an occlusion test that analyzes whether the point located on the polygon mesh is, in an image corresponding to the respective frame, obstructed by other surfaces of the polygon mesh (Fig. 8D; Paragraph [0059]: After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour. As part of this processing, the visibility of each face in the mesh is determined and matched to the particular view that provides best observation on the faces in all views. 
The full set of views matched by all faces in the mesh serves as the key view frame. The term “key” relates to the use of a particular image view as a type of “color key”, a resource used for color mapping, as the term is used by those skilled in the color imaging arts; Paragraphs [0081]-[0091]: the key view frame can be identified, using the following exemplary sequence, shown as form visible view step S310 and key frame setup step S320 in the logic flow diagram of FIG. 8D: … 1) For each face Fci in the final output mesh Mo, determine in which views V the face Fci is visible. In form visible view step S310, form the visible view set VS_i of Fci: VS_i=[Vn, Vm, . . . , Vl]. … This determination can use visibility criteria such as the following: [0084] a) The normal of face Fci must be closer to the viewing direction of VS_i than a pre-defined threshold, Th_n; … b) Face Fci must be in the field of view (FOV) of visible view set VS_i; … c) Fci must be closer to the focus plane of visible view set VS_i than a pre-defined threshold, Th_f; … However, alternate visibility criteria can be applied. … 2) Using visible view set VS_i for Fc, identify and remove views that overlap excessively. For example, if two views contain an excessive number of identical faces, one of the views is redundant and can be removed. In exemplary key frame setup step S320, the remaining views V can be used to form the key view frame as set).
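The four tests recited in claim 11 can be sketched as a single predicate over one frame, as below; the thresholds, frame layout, and depth-buffer occlusion test are hypothetical assumptions rather than anything recited by Wu or by the claim.

```python
import numpy as np

def passes_filters(point, normal, frame, max_angle_deg=60.0, max_dist=30.0):
    """Claim-11 style frame filtering. A frame is assumed to carry a camera
    position, a pose (R, t), intrinsics K, and a depth image."""
    # Camera perpendicularity test: angle between point normal and view ray.
    to_cam = frame["cam_pos"] - point
    dist = float(np.linalg.norm(to_cam))
    if np.dot(normal, to_cam / dist) < np.cos(np.radians(max_angle_deg)):
        return False
    # Camera distance test.
    if dist > max_dist:
        return False
    # View frustum test: the projected pixel must land inside the image.
    pc = frame["R"] @ point + frame["t"]
    if pc[2] <= 0:
        return False
    uvw = frame["K"] @ pc
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    h, w = frame["depth"].shape
    if not (0 <= u < w and 0 <= v < h):
        return False
    # Occlusion test: a nearer stored depth at that pixel means another
    # surface of the mesh obstructs the point in this frame.
    return abs(frame["depth"][int(v), int(u)] - pc[2]) < 0.1
```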
Regarding claim 12, Wu discloses the method according to claim 1, wherein computing, for each respective candidate texture value in the set of candidate texture values, a quality factor includes assigning, for each respective frame in the set of subframes, weighting factors based on at least one of: a degree of perpendicularity between a camera sensor plane corresponding to the respective frame and a normal of the respective point located on the polygon mesh (Fig. 8A; Paragraph [0066]: FIG. 8A is a logic flow diagram that shows exemplary processing for forming composite color shading images [Is1, Is2, . . . Isk}. This processing can be executed for each of the K views of the tooth obtained from reflectance imaging that obtains both contour images and color image content. The process of composite color shading image set generation step S100 can begin with a set of views V…each view at a different view pose, wherein pose for a particular view relates to its viewing aspect; the phrase “view pose” or “view aspect” relates to orientation alignment and positional characteristics such as the relative amounts of roll, yaw, and pitch of the subject relative to a coordinate system having its origin at a focal point of the camera, and includes characteristics such as view distance and camera angle; Paragraphs [0094]-[0101]: Exemplary criteria and parameters for determining visibility can include the following:…d) Normal score: The normal of Fci must be closer to the viewing direction of VS_i than a pre-defined threshold, Th_n1; … e) FOV: Fci must be in the FOV of VS_i;… f) Focus score: Fci must be closer to the focus plane of VS_i than a pre-defined threshold, Th_f2. … It can be appreciated that alternate visibility criteria can be used. … 2) Combine the normal score and focus score for all VS_i as the assigning score; … 3) Select one view with the best assigning score (e.g., the optimal view) as the assigned view of Fci), a distance, in the 3D coordinate system, between a camera capture position corresponding to the respective frame and the respective point located on the polygon mesh (Fig. 8A; Paragraph [0066]: FIG. 8A is a logic flow diagram that shows exemplary processing for forming composite color shading images [Is1, Is2, . . . Isk}. This processing can be executed for each of the K views of the tooth obtained from reflectance imaging that obtains both contour images and color image content. The process of composite color shading image set generation step S100 can begin with a set of views V…each view at a different view pose, wherein pose for a particular view relates to its viewing aspect; the phrase “view pose” or “view aspect” relates to orientation alignment and positional characteristics such as the relative amounts of roll, yaw, and pitch of the subject relative to a coordinate system having its origin at a focal point of the camera, and includes characteristics such as view distance and camera angle), a scanner movement speed corresponding to the respective frame (Paragraph [0044]: camera 40 is used in still mode, held in the same fixed position for obtaining color component images as that used for structured light pattern projection and imaging. In other exemplary embodiments, for color contour imaging, camera 40 can move while obtaining color component images and/or can move when used for structured light pattern projection and imaging), or a degree of whiteness of the respective candidate texture value (Paragraph [0048]: Intra-oral camera 40 of FIG. 
3 optionally uses polarized light for surface contour imaging of tooth 22. Polarizer 14 provides the fringe pattern illumination from fringe pattern generator 12 as linearly polarized light. In one embodiment, the transmission axis of analyzer 28 is parallel to the transmission axis of polarizer 14. With this arrangement, only light with the same polarization as the fringe pattern is provided to the sensor array 30. In another embodiment, analyzer 28, in the path of reflected light to sensor array 30, can be rotated by an actuator 18 into either an orientation that matches the polarization transmission axis of polarizer 14 and obtains specular light from surface portions of the tooth or an orientation orthogonal to the polarization transmission axis of polarizer 14 for reduced specular content, obtaining more of the scattered light from inner portions of the tooth. In certain exemplary embodiments herein, combinations of polarized and non-polarized light can be used).
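Claim 12's weighting factors can be illustrated as a single scalar combining the four recited cues; every term and mixing weight below is a hypothetical assumption, since the claim lists the cues without prescribing a formula.

```python
import numpy as np

def quality_factor(perp_cos, dist, scan_speed, rgb, w=(0.4, 0.3, 0.2, 0.1)):
    """Claim-12 style quality factor from perpendicularity, camera distance,
    scanner movement speed, and whiteness of the candidate texture value."""
    perp_term = max(float(perp_cos), 0.0)    # 1.0 when the sensor plane faces the point
    dist_term = 1.0 / (1.0 + dist)           # nearer captures weigh more
    speed_term = 1.0 / (1.0 + scan_speed)    # slower scanner motion weighs more
    white_term = 1.0 - np.std(rgb) / 255.0   # small channel spread reads as whiter
    return w[0]*perp_term + w[1]*dist_term + w[2]*speed_term + w[3]*white_term
```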
Regarding claim 13, Wu discloses the method according to claim 1, wherein computing the respective texture value for the respective point by combining, based on their respective quality factors, candidate texture values selected from the set of candidate texture values comprises: selecting a subset of the candidate texture values based on their respective quality factors and averaging individual color channel values provided by each candidate texture value in the subset of candidate texture values (Fig. 12A; Fig. 12B; Paragraphs [0121]-[0128]: Set the target color values at UV1k1 and UV2k1 with the weighted average of C1k1 and C1k2; and C2k1 and C2k2, respectively. At the same time, also calculate the difference between the current color value and target color value… 4c) For all pixels on the 2d line linking texture coordinates UV1k1 and UV2k1, calculate a color difference using bi-linear interpolation and the color difference at coordinates UV1k1 and UV2k1 calculated in 4b). The pixels on lines of all projected edges of the current fragment define an enclosed contour, termed an edge contour, of current fragments on images. The pixel values are used as the boundary condition for the following sub-step. … 4d) Based on the above boundary condition, an interpolation can be performed to calculate the pixel value for all pixels bounded by the edge contour. The interpolated pixel value is the smoothed color difference for each pixel bounded within the edge contour. … 4e) For each pixel inside the current fragment, apply the interpolated color difference to the original color value of the pixel in color shading image Isk1 to generate the color blending version Isk_adj_1 as the final shading image for the current fragment. Results from before and after color blending appear as shown in the examples of FIG. 12A and FIG. 12B).
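Claim 13's combination step (selecting a subset of candidates by quality factor and averaging their individual color channels) can be sketched as follows; the cutoff k is a hypothetical parameter, and the candidate representation is an illustrative choice.

```python
import numpy as np

def combine_by_quality(candidates, k=3):
    """Claim-13 style combination: keep the k candidates with the highest
    quality factors, then average each color channel across the survivors.
    Each candidate is an (rgb, quality) pair."""
    best = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return np.mean([rgb for rgb, _ in best], axis=0)

# Example: three near-white candidates outweigh a darker, low-quality sample.
# combine_by_quality([((250, 248, 244), 0.9), ((180, 170, 160), 0.2),
#                     ((246, 244, 240), 0.8), ((251, 249, 246), 0.7)])
```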
Regarding claim 14, the limitations of this claim substantially correspond to the limitations of claim 1; thus, claim 14 is rejected on similar grounds.
Regarding claim 15, the limitations of this claim substantially correspond to the limitations of claim 1; thus, claim 15 is rejected on similar grounds.
Regarding claim 16, Wu discloses a method for coloring points in a three-dimensional (3D) model of an oral structure, the method comprising: providing the 3D model of the oral structure, the 3D model of the oral structure comprising a plurality of points registered in a 3D coordinate system (Paragraphs [0055]-[0059]: have been a number of 3-D reconstruction apparatus and methods to capture 3-D models of teeth, some of which actually collect the 3-D geometric information from the tooth surface. There have also been a number of apparatus disclosed for capturing photographs of the teeth surface, e.g., color images, which actually reflect the spectrum properties of teeth surfaces for given illumination sources. Method and/or apparatus embodiments of the application described herein can help to improve the user experience and/or provide enhancement of surface details and/or color texture in combining the 3-D geometric information and color image content…some of the conventional 3-D dental scanners use a color mapping scheme that assigns a color value to each vertex in the 3-D tooth model. This type of vertex/color assignment can be a poor compromise, however, and often provides an approximation of color that is disappointing, making it difficult to observe more complex surface detail information and color texture. An example of the results of existing 3-D dental scanners is shown in FIG. 7A. In contrast to FIG. 7A, an example of texture mapping results on teeth according to exemplary method and/or apparatus embodiments of the application is shown in FIG. 7B. As suggested by FIGS. 7A and 7B, and particularly noticeable when presented in color, exemplary texture mapping methods and/or apparatus of the application provide a more accurate and/or more realistic representation of the surface appearance for tooth structures than do vertex mapping technique… matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes); identifying a set of points in the plurality of points of the 3D model, each respective identified point in the 3D model being defined by a coordinate value in the 3D coordinate system (Paragraphs [0060]-[0061]: processing can be performed fragment by fragment, one at a time. In processing each fragment, the vertices that define the fragment are projected onto its view (e.g., its key view) using a standard projection routine, employing techniques well known for mapping 3-D points to a 2-D plane. This projection can also use the camera's intrinsic parameters extracted as part of camera calibration… projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. 
Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping; Paragraph [0148]: determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image); determining, for each identified point in the 3D model, a respective color information value, the respective color information value determined by: identifying a set of images captured from an image scan of at least a portion of the oral structure, the identified set of images each comprising a corresponding point that corresponds to the respective point in the 3D model and each having associated color information (Paragraph [0054]: Calibration is provided for the image content, adjusting the obtained image data to generate accurate color for each image pixel. FIGS. 6A, 6B, and 6C show component grayscale or monochrome images 90r, 90g, and 90b of teeth obtained on monochrome sensor array 30 using red, green, and blue light from light sources 32r, 32g, and 32b (FIG. 3) respectively. A grayscale representation of a color image can be formed by combining calibrated image data content for the red, green, and blue illumination. Color calibration data, such as using a linear calibration matrix or other calibration mechanism, can be of particular value where a monochrome sensor is used to obtain color data and helps to compensate for inherent response characteristics of the sensor array for different wavelengths; Paragraph [0148]: determining representative coordinates for each of the 3-D mesh fragment surfaces in the assigned 2-D color texture shading image; and rendering the 3-D mesh polygonal surfaces with the color texture values from the 3-D mesh fragment surfaces according to the determined coordinates in the assigned 2-D color texture shading image to generate a color texture 3-D surface contour image of the one or more teeth. In one embodiment, assigning each 3-D mesh polygonal surface forming the 3-D surface contour image of the one or more teeth to said one 2-D color texture shading images can include identifying 3-D mesh polygonal (e.g. 
triangular) surfaces forming the 3-D surface contour image of the one or more teeth; matching a first subset of 2-D color texture shading images by orientation alignment to a single one of the 3-D mesh polygonal surfaces; and determining 3-D mesh fragment surfaces by grouping remaining ones of the 3-D mesh polygonal surfaces to a single one of the matched 3-D mesh polygonal surfaces. In one embodiment, determining representative coordinates for each of the 3-D mesh fragment surfaces can include projection of the 3-D mesh fragment surface coordinates into the assigned 2-D color texture shading image); combining the color information associated with the corresponding point in each of the identified scan images into a color information value (Fig. 3; Paragraphs [0045]-[0049]: Illumination array 10 projects light of different color component wavelengths, typically Red (R), Green (G), and Blue (B), one at a time, and captures a separate image on monochrome sensor array 30 at each wavelength band. However, other color component combinations can be used. The captured images are also processed and stored by control logic processor 80… shown in FIG. 3 is a red light source 32r, a green light source 32g, and a blue light source 32b for providing color light for capturing three grayscale images, also called monochromatic shading images needed for construction of a full color image. Each of these light sources can include a single light emitting element, such as a light-emitting diode (LED), or multiple light emitting elements. In the embodiment shown, the illumination path for structured pattern light from the fringe generator and the RGB light is the same; the detection path of light toward sensor array 30 is also the same for both structured pattern and RGB image content; Paragraph [0061]: projected image coordinates of vertices are used as their texture coordinates. In one exemplary embodiment, all boundaries between texture fragments are also projected onto views in the key view. Using corresponding color data from each key view, a color blending method can be performed on the projected boundary in order to reduce color discrepancies and/or to correct for any color discrepancy between views due to the mapping); and associating the combined color information value with the respective color information value of the respective point in the 3D model (Paragraphs [0055]-[0059]: There have been a number of 3-D reconstruction apparatus and methods to capture 3-D models of teeth, some of which actually collect the 3-D geometric information from the tooth surface. There have also been a number of apparatus disclosed for capturing photographs of the teeth surface, e.g., color images, which actually reflect the spectrum properties of teeth surfaces for given illumination sources. Method and/or apparatus embodiments of the application described herein can help to improve the user experience and/or provide enhancement of surface details and/or color texture in combining the 3-D geometric information and color image content…some of the conventional 3-D dental scanners use a color mapping scheme that assigns a color value to each vertex in the 3-D tooth model. This type of vertex/color assignment can be a poor compromise, however, and often provides an approximation of color that is disappointing, making it difficult to observe more complex surface detail information and color texture. An example of the results of existing 3-D dental scanners is shown in FIG. 7A. In contrast to FIG. 7A, an example of texture mapping results on teeth according to exemplary method and/or apparatus embodiments of the application is shown in FIG. 7B. As suggested by FIGS. 7A and 7B, and particularly noticeable when presented in color, exemplary texture mapping methods and/or apparatus of the application provide a more accurate and/or more realistic representation of the surface appearance for tooth structures than do vertex mapping techniques… After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes).
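For illustration only, and forming no part of the grounds of rejection: the operations cited above for claim 16 (forming a color image from three monochrome shading images via a linear calibration matrix, projecting model points into calibrated 2-D views, and combining the corresponding color data) can be sketched in Python roughly as follows. The function names, the pinhole camera model, and the equal-weight averaging are assumptions introduced purely for illustration; Wu does not specify the blending at this level of detail.

import numpy as np

def color_from_shading_images(red, green, blue, calib=None):
    """Combine three monochrome shading images (captured under R, G, B
    illumination) into one color image, optionally applying a linear
    color-calibration matrix (a 3x3 matrix applied per pixel)."""
    rgb = np.stack([red, green, blue], axis=-1).astype(float)  # H x W x 3
    if calib is not None:
        rgb = rgb @ calib.T  # per-pixel linear calibration
    return rgb

def project_point(point_3d, K, R, t):
    """Pinhole projection of a 3-D point into a calibrated 2-D view.
    K: 3x3 intrinsic matrix; R (3x3), t (3,): world-to-camera pose.
    Returns (u, v) pixel coordinates, or None if the point lies
    behind the camera."""
    p_cam = R @ np.asarray(point_3d, dtype=float) + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def blend_vertex_color(point_3d, views):
    """Project one model point into every view that sees it, sample the
    color there, and combine the samples into one color value.
    Equal weights are an assumption made only for this sketch."""
    samples = []
    for view in views:  # each view: {'image': HxWx3, 'K': ..., 'R': ..., 't': ...}
        uv = project_point(point_3d, view['K'], view['R'], view['t'])
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        h, w = view['image'].shape[:2]
        if 0 <= v < h and 0 <= u < w:
            samples.append(np.asarray(view['image'][v, u], dtype=float))
    return np.mean(samples, axis=0) if samples else None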
Regarding claim 17, Wu discloses the method according to claim 16, the 3D model of the oral structure comprising a point cloud comprising a plurality of points registered in a 3D coordinate system and representing the oral structure (Paragraph [0059]: After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes. The multiple combined faces extend across the surface of teeth and related structures and thus, plane section by plane section, define the surface contour).
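Again for illustration only: combining per-view point clouds registered in one 3-D coordinate system into a triangular mesh, as the cited passage describes, might be outlined as below. The known per-view poses and the 2.5-D Delaunay triangulation are assumptions standing in for the matching, merging, and noise-suppression pipeline the reference actually describes.

import numpy as np
from scipy.spatial import Delaunay

def clouds_to_mesh(per_view_clouds, poses):
    """Transform each per-view point cloud by its (R, t) pose into a
    common 3-D coordinate system, stack them, and triangulate.
    A 2.5-D Delaunay triangulation over x-y is only a stand-in for
    surface reconstruction; it assumes a roughly height-field surface."""
    merged = np.vstack([pts @ R.T + t
                        for pts, (R, t) in zip(per_view_clouds, poses)])
    faces = Delaunay(merged[:, :2]).simplices  # (M, 3) vertex-index triangles
    return merged, faces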
Regarding claim 18, Wu discloses the method according to claim 16, the 3D model of the oral structure comprising a polygon mesh comprising a number of connected polygons registered in a 3D coordinate system, wherein the identified set of points are located on the polygon mesh (Fig. 8C; Paragraphs [0055]-[0059]: There have been a number of 3-D reconstruction apparatus and methods to capture 3-D models of teeth, some of which actually collect the 3-D geometric information from the tooth surface. There have also been a number of apparatus disclosed for capturing photographs of the teeth surface, e.g., color images, which actually reflect the spectrum properties of teeth surfaces for given illumination sources. Method and/or apparatus embodiments of the application described herein can help to improve the user experience and/or provide enhancement of surface details and/or color texture in combining the 3-D geometric information and color image content…some of the conventional 3-D dental scanners use a color mapping scheme that assigns a color value to each vertex in the 3-D tooth model. This type of vertex/color assignment can be a poor compromise, however, and often provides an approximation of color that is disappointing, making it difficult to observe more complex surface detail information and color texture. An example of the results of existing 3-D dental scanners is shown in FIG. 7A. In contrast to FIG. 7A, an example of texture mapping results on teeth according to exemplary method and/or apparatus embodiments of the application is shown in FIG. 7B. As suggested by FIGS. 7A and 7B, and particularly noticeable when presented in color, exemplary texture mapping methods and/or apparatus of the application provide a more accurate and/or more realistic representation of the surface appearance for tooth structures than do vertex mapping techniques… After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes).
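Purely as an illustrative container, not the reference's data structure: a polygon mesh of the kind described for claim 18 (vertices registered in a 3-D coordinate system, faces given as vertex-index polygons, with per-vertex colors filled in later) can be held as follows.

from dataclasses import dataclass
import numpy as np

@dataclass
class PolygonMesh:
    """Minimal polygon mesh: vertices share one 3-D coordinate system;
    each face is a tuple of three or more vertex indices (triangles in
    the simplest case); colors holds per-vertex values once assigned."""
    vertices: np.ndarray            # (N, 3) float coordinates
    faces: list                     # e.g. [(0, 1, 2), (1, 2, 3), ...]
    colors: np.ndarray = None       # (N, 3) once color values are assigned

    def identified_points(self):
        """The identified set of points located on the mesh; in the
        simplest reading these are the vertices themselves."""
        return self.vertices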
Regarding claim 19, Wu discloses the method according to claim 18, wherein the identified set of points comprise the vertices of the polygons in the polygon mesh (Paragraph [0059]: After mesh-generation, matching, merging, and 3-D mesh noise suppression, 3-D point clouds generated in all views can be combined to generate the final 3-D mesh surfaces for the subject teeth. This final 3-D mesh defines a number of faces, each face defined by its nearest 3-D vertices, so that each face is planar and has a triangular construction, although more generally, each face is planar and has a polygonal shape formed from three or more sides. A point cloud of a surface can be used to define a triangular mesh and, optionally, a mesh having other polygonal shapes. The triangular mesh is the most geometrically primitive mesh and generally allows the most straightforward computation of polygonal shapes).
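Tying the sketches above together, again for illustration only: on the reading mapped to claim 19, coloring the model reduces to evaluating the blended color at every polygon vertex. The mid-gray fallback for vertices unseen by any view is an assumption of this sketch.

import numpy as np

def color_mesh_vertices(mesh, views, fallback=np.full(3, 128.0)):
    """Assign a combined color information value to every vertex of the
    mesh, i.e., the identified set of points comprises the polygon
    vertices (reusing blend_vertex_color from the earlier sketch)."""
    mesh.colors = np.array([
        c if (c := blend_vertex_color(v, views)) is not None else fallback
        for v in mesh.vertices
    ])
    return mesh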
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW D SALVUCCI, whose telephone number is (571) 270-5748. The examiner can normally be reached M-F, 7:30-4:00 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XIAO WU, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW SALVUCCI/Primary Examiner, Art Unit 2613