DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,056,820 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they read on each other, as shown in the following mapping table.
Current Application claim    Patent claim
1                            1
2                            2
3                            3
4                            4
5                            4
6                            5
7                            5
8                            6
9                            8
10                           9
11                           10
12                           11
13                           11
14                           12
15                           12
16                           13
17                           14
18                           15
19                           16
20                           17
Also shown below is a mapping between the limitations of the independent claims of the current application (U.S. Patent Application No. 18/793,273) and the independent claims of U.S. Patent No. 12,056,820 B2.
Claim 1 (Current Application): A method comprising: accessing an image depicting a dental arch of a user; identifying, from the image, a set of features of the dental arch; generating, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, wherein a shape of the first voxel grid is generated based on a camera projection matrix accounting for a depth of the dental arch depicted in the image; and generating a 3D model comprising a 3D surface of the dental arch of the user based on a voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical structure; and processing the geographical structure through a refinement process using the set of features for vertices of at least one respective tooth.

Claim 1 (Patent): A method comprising: accessing an image depicting a dental arch of a user; identifying, from the image, a set of features describing the dental arch of the user; generating, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, the first voxel grid including occupancy probabilities representing a three-dimensional (3D) surface of the dental arch of the user, wherein each occupancy probability of the occupancy probabilities correspond to a voxel of the first voxel grid, wherein a shape of the first voxel grid is generated based on a camera projection matrix, the camera projection matrix accounting for a depth of the dental arch depicted in the image; and generating a 3D model comprising a 3D surface of the dental arch of the user based on a merged voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical mesh based on the occupancy probabilities included in the merged voxel grid, wherein each occupancy probability of the occupancy probabilities corresponds to a probability value of one or more voxels being occupied by the dental arch of the user, and wherein generating the geographical mesh comprises iteratively processing the merged voxel grid including the probability value of each of the one or more voxels being occupied by the dental arch of the user; and processing the geographical mesh through a mesh refinement process, wherein the mesh refinement process comprises extracting the set of features for vertices of at least one respective tooth, propagating information along mesh edges, and updating vertex positions.
Claim 9 (Current Application): A system comprising: a processing circuit configured to: access an image depicting a dental arch of a user; identify, from the image, a set of features of the dental arch; generate, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, wherein a shape of the first voxel grid is generated based on a camera projection matrix accounting for a depth of the dental arch depicted in the image; and generate a 3D model comprising a 3D surface of the dental arch of the user based on a voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical structure; and processing the geographical structure through a refinement process using the set of features for vertices of at least one respective tooth.

Claim 8 (Patent): A system comprising: one or more computer processors; and one or more computer-readable mediums storing instructions that, when executed by the one or more computer processors, cause the system to perform operations comprising: accessing an image depicting a dental arch of a user; identifying, from the image, a set of features describing the dental arch of the user; generating, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, the first voxel grid including occupancy probabilities representing a three-dimensional (3D) surface of the dental arch of the user, wherein each occupancy probability of the occupancy probabilities correspond to a voxel of the first voxel grid, wherein a shape of the first voxel grid is generated based on a camera projection matrix, the camera projection matrix accounting for a depth of the dental arch depicted in the image; and generating a 3D model comprising a 3D surface of the dental arch of the user based on a merged voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical mesh based on the occupancy probabilities included in the merged voxel grid, wherein each occupancy probability of the occupancy probabilities corresponds to a probability value of one or more voxels being occupied by the dental arch of the user, and wherein generating the geographical mesh comprises iteratively processing the merged voxel grid including the probability value of each of the one or more voxels being occupied by the dental arch of the user; and processing the geographical mesh through a mesh refinement process, wherein the mesh refinement process comprises extracting the set of features for vertices of at least one respective tooth, propagating information along mesh edges, and updating vertex positions.
Claim 17 (Current Application): A non-transitory computer-readable medium storing instructions that, when executed by one or more computer processors of one or more computing devices, cause the one or more computing devices to perform operations comprising: accessing an image depicting a dental arch of a user; identifying, from the image, a set of features of the dental arch; generating, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, wherein a shape of the first voxel grid is generated based on a camera projection matrix accounting for a depth of the dental arch depicted in the image; and generating a 3D model comprising a 3D surface of the dental arch of the user based on a voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical structure; and processing the geographical structure through a refinement process using the set of features for vertices of at least one respective tooth.

Claim 14 (Patent): A non-transitory computer-readable medium storing instructions that, when executed by one or more computer processors of one or more computing devices, cause the one or more computing devices to perform operations comprising: accessing an image depicting a dental arch of a user; identifying, from the image, a set of features describing the dental arch of the user; generating, using a convolutional neural network, a first voxel grid based on the set of features identified from the image, the first voxel grid including occupancy probabilities representing a three-dimensional (3D) surface of the dental arch of the user, wherein each occupancy probability of the occupancy probabilities correspond to a voxel of the first voxel grid, wherein a shape of the first voxel grid is generated based on a camera projection matrix, the camera projection matrix accounting for a depth of the dental arch depicted in the image; and generating a 3D model comprising a 3D surface of the dental arch of the user based on a merged voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises: generating a geographical mesh based on the occupancy probabilities included in the merged voxel grid, wherein each occupancy probability of the occupancy probabilities corresponds to a probability value of one or more voxels being occupied by the dental arch of the user, and wherein generating the geographical mesh comprises iteratively processing the merged voxel grid including the probability value of each of the one or more voxels being occupied by the dental arch of the user; and processing the geographical mesh through a mesh refinement process, wherein the mesh refinement process comprises extracting the set of features for vertices of the at least one respective tooth, propagating information along mesh edges, and updating vertex positions.
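For illustration of the subject matter being compared — a voxel grid of occupancy probabilities, merged with a second grid and binarized into an occupied volume — the following minimal Python sketch may be helpful. It is illustrative only; the function names, the per-voxel-maximum merge rule, and the 0.5 threshold are assumptions, not taken from either the application or the patent:

```python
import numpy as np

def merge_voxel_grids(grid_a: np.ndarray, grid_b: np.ndarray) -> np.ndarray:
    """Combine two occupancy-probability grids by taking, per voxel,
    the higher probability of being occupied (one plausible merge rule)."""
    return np.maximum(grid_a, grid_b)

def occupied_voxels(grid: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize occupancy probabilities: a voxel is treated as part of the
    3D surface volume when its probability exceeds the threshold."""
    return grid > threshold

# Two tiny 2x2x2 probability grids standing in for the claimed
# "first voxel grid" and "second voxel grid".
a = np.zeros((2, 2, 2)); a[0, 0, 0] = 0.9
b = np.zeros((2, 2, 2)); b[1, 1, 1] = 0.8
merged = merge_voxel_grids(a, b)
print(int(occupied_voxels(merged).sum()))  # prints 2 (two occupied voxels)
```

A full pipeline would then extract a surface mesh from the binarized grid (e.g., by a marching-cubes step) before any refinement pass.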
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 106504331 A, published 2017-03-15) in view of Parpara et al. (US 2019/0102880 A1) and Ho (US 2018/0218513 A1).
Regarding claim 1, see the rejection of claim 17.
Regarding claim 2, see the rejection of claim 18.
Regarding claim 3, see the rejection of claim 19.
Regarding claim 4, see the rejection of claim 20.
Regarding claim 6, see the rejection of claim 14.
Regarding claim 7, see the rejection of claim 15.
Regarding claim 8, see the rejection of claim 16.
Regarding claim 9, see the rejection of claim 17.
Regarding claim 10, see the rejection of claim 18.
Regarding claim 11, see the rejection of claim 19.
Regarding claim 12, see the rejection of claim 20.
Regarding claim 14, Li in view of Parpara and Ho discloses all the limitations of claim 9, including the dental arch of the user.
Li discloses the geographical structure is at least one of (i) a polygon mesh model, (ii) a triangle mesh model, (iii) a non-uniform rational basis spline (NURBS) surface model, or (iv) a CAD model (Page 11, paragraph 1 - "make certain adjustment to the triangular mesh vertex coordinates of the near clipping plane. make distance clipping plane near the vertex of the triangular mesh moves on the plane, such as the triangular grid processing in FIG. 9"; Page 7, paragraph 11 - "the curve represented by the triangular grid model, KA can be approximated, is the partitioned area in the triangular area." FIG. 9 is a cutting plane cutting a triangular grid based on the feature, with curvature-divided areas.).
Ho discloses the grid comprises a point cloud representation ([0048] - "including a matched point cloud from the current image into the voxel grid, the voxel grid is then refined to be an accurate representation of the scene"; [0070] - "update the voxel grid to include the matched point cloud").
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li in view of Parpara and Ho such that the grid comprises a point cloud representation, as taught by Ho. The motivation for doing so is that the measurement can be more accurate.
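Ho's cited operation — including a matched point cloud into a voxel grid — can be sketched as a simple quantization step. The following is an illustrative stand-in, not Ho's actual implementation; the voxel size and grid dimensions are assumed values:

```python
import numpy as np

def points_to_voxel_grid(points: np.ndarray, voxel_size: float, dims: tuple) -> np.ndarray:
    """Quantize 3D points into a binary occupancy grid: each point marks
    the voxel containing it as occupied."""
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor(points / voxel_size).astype(int)
    # Drop points that fall outside the grid bounds.
    idx = idx[(idx >= 0).all(axis=1) & (idx < np.array(dims)).all(axis=1)]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Three points, two of which land in the same voxel.
pts = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9], [0.15, 0.1, 0.05]])
g = points_to_voxel_grid(pts, voxel_size=0.5, dims=(2, 2, 2))
print(int(g.sum()))  # prints 2 (two distinct voxels occupied)
```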
Regarding claim 15, Li in view of Parpara and Ho discloses all the limitations of claim 9, including the dental arch of the user.
Li discloses generating a mesh based on a plurality of occupancy probabilities comprised in the first voxel grid (
Page 11, paragraph 1 - “make certain adjustment to the triangular mesh vertex coordinates of the near clipping plane. make distance clipping plane near the vertex of the triangular mesh moves on the plane, such as the triangular grid processing in FIG. 9 “, Page 7 paragraphs 11- “the curve represented by the triangular grid model, KA can be approximated , is the partitioned area in the triangular area.” FIG. 9. Is a cutting plane cut triangular grid based on the feature, curvature with divided area, voxel grid.
Fig. 3, Page 8 paragraphs 9 to 11 page 11 paragraph 3 - “The comparison of characteristic value similarity in the four, similarity between two areas is expressed as: wherein [alpha] 1, [alpha] 2 and [alpha] 3 represents the corresponding weighted value, respectively taking 0.25, 0.25 and 0.5 best similarity comparison effect by experiment three weights. sum of the similarity of the similarity of each region. process between two crown region similarity comparison may be regarded as optimal matching node is composed by two groups of area complete bipartite graph, as shown in FIG. 4, in which S1 and S2 respectively represent two crown area node ui is S1, vi S2 regional node.”
The weighted values between the two areas are read as the occupancy probabilities that include the three-dimensional dental model.),
the mesh representing the 3D surface of the arch (Page 11, paragraphs 1 to 3; Page 2, last 3 paragraphs - As shown in Fig. 14, the three-dimensional final model generated is based on Fig. 9. The regions form a whole tooth crown curved surface, representing the shape of the original dental crown curved surface; the invention is a dental modelling method based on a three-dimensional model.).
Regarding claim 16, Li in view of Parpara and Ho discloses all the limitations of claim 15, including the dental arch of the user.
Li discloses iteratively processing the mesh through the refinement process, the refinement process comprising a vertex alignment stage, a graph convolution stage, and a vertex refinement stage (Page 3 paragraphs 1 to 5 -
“A dental modelling method based on three-dimensional model retrieval, wherein it comprises the following steps:
(1) establishing a dental model base, and extracting four feature values by performing the region division to the crown;
(2) reading to be modelled data model, the data model to be modelled area dividing, according to the area division calculation descriptors of the crown and the teeth type by type identification judgement;
(3) searching the tooth model with the highest similarity in the model library according to the description sub and tooth type apparatus;
(4) using three translation transformation method to be the retrieval of data model and modeling of tooth model registration;
(5) cutting and splicing the two models to generate a complete tooth model.” “a mesh refinement process”.
Fig 9 is a triangle mesh.
“the rotation matrix R and the translation matrix T to be modelled dental crown model with similar tooth model, then using the AABB axis aligned bounding box algorithm, using a minimum cuboid parallel to coordinate axes to be modelled tooth crown model, z-axis negative position of the rectangular plane of the half-shaft direction is the desired position of the cutting plane, as shown in FIG. 9.” “vertex alignment stage” Page 10, paragraph 9.
“then comparing the relative area of the region. When the relative area of the two areas is delta S1 and delta S2, the area corresponding to the area similarity is expressed as the last area relative to the adjacent side length are compared. Because adjacent regions long opposite sides of recording area and the adjacent three types of region common edge length and area length ratio, is one three-dimensional vectors. when carrying out similarity comparison, similar to the similarity calculation method, and calculating average.” “convolution stage, deals with edge length” Page 8 lines 8 to 9.
“The method in FIG. 10 (a) the method uses diagonal of quadrilateral, mesh, to quadrilateral is divided into two triangles, (b) is the clipping plane generated by the two endpoints of the middle point of the edge and the opposite-side connection, the original quadrangle into three triangles, (c) method is fussy, the cutting two end points of the side connected with the midpoint of the opposite side, at the same time, the middle point of each side is connected with adjacent triangle vertex. three methods each with quality.” “a vertex refinement stage” Page 10 paragraph 12.).
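The refinement stages mapped above — propagating information along mesh edges and updating vertex positions — can be illustrated by a single Laplacian-style smoothing pass over mesh edges. This sketch is a generic stand-in for a graph-convolution-style update, not the algorithm of Li or of the claims; the blend factor alpha is an assumed parameter:

```python
import numpy as np

def refine_vertices(vertices: np.ndarray, edges: list, alpha: float = 0.5, iters: int = 1) -> np.ndarray:
    """One simple pass over mesh edges: move each vertex part way toward
    the mean position of its edge-connected neighbours."""
    v = vertices.astype(float).copy()
    n = len(v)
    for _ in range(iters):
        nbr_sum = np.zeros_like(v)
        nbr_cnt = np.zeros(n)
        for i, j in edges:                      # propagate along mesh edges
            nbr_sum[i] += v[j]; nbr_cnt[i] += 1
            nbr_sum[j] += v[i]; nbr_cnt[j] += 1
        mask = nbr_cnt > 0
        mean = nbr_sum[mask] / nbr_cnt[mask, None]
        v[mask] = (1 - alpha) * v[mask] + alpha * mean   # update vertex positions
    return v

# Triangle with one vertex pulled off-plane; a pass pulls it back toward its neighbours.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 4.0]])
edges = [(0, 1), (1, 2), (2, 0)]
print(refine_vertices(verts, edges)[2])
```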
Regarding claim 17, Li discloses a non-transitory computer-readable medium storing instructions that, when executed by one or more computer processors of one or more computing devices, cause the one or more computing devices to perform operations comprising (Page 1 last paragraph - computer aided oral cavity orthodontic system uses computer graphics, graphic image processing and analysis techniques, the oral cavity orthodontic performing computer aided design. It is known that a computer has a memory storing instructions that perform functions.):
generating a first voxel grid based on the set of features identified from the image (Page 5 paragraph 8 –the area is divided it into two or more small triangles, “generating” , “first small triangle is a first voxel grid”, based on the image of the triangle constrain condition.
Page 9 paragraph 8 - The invention according to a normal vector and the distance of the seed point of the image, establishing the triangle constraint condition, further screening, obtaining target point, “based on the set of features identified from the image”.),
wherein a shape of the first voxel grid is generated (Page 2 last paragraph -the method can make the dental model is divided into relatively simple shapes “generate”, a plurality of area feature. These regions form a whole tooth crown curved surface, represents the shape feature of the original dental crown curved surface.
Page 6 last paragraph, paragraph 10 and Page 7 paragraph first - The invention uses three-dimensional tooth model data is a triangular grid model . Triangular shape of is made up of vertices thus “voxel grid” can be read on.
Page 5 paragraph 8 - In the step (5) Triangular grid is made up of small triangles, this the first small triangle is the first voxel grid.);
generating a 3D model comprising a 3D surface of the dental arch of the user based on a voxel grid from combining at least a portion of the first voxel grid and a portion of a second voxel grid, wherein generating the 3D model further comprises (As shown in Fig. 14, the three-dimensional final model generated is based on Fig. 9, "3D model". The regions form a whole tooth crown curved surface, representing the shape of the original dental crown curved surface, combination (surface, first voxel grid and second voxel grid).
Page 5 paragraph 8 – in the step (5) - it is to divide it into two or more small triangles, narrow triangle processing, generally to combination processing thus two small triangles (first voxel grid and second voxel grid) are combined.
Page 11 paragraphs 1 to 3, Page 2, last 2 paragraphs - “processing generally narrow triangle to perform combination processing and complicated operation, make certain adjustment to the triangular mesh vertex coordinates of the near clipping plane. make distance clipping plane near the vertex of the triangular mesh moves on the plane, such as the triangular grid processing in FIG. 9, FIG. 13. “):
generating a geographical structure (Page 11 paragraphs 1 to 3, Page 2, last 2 paragraphs - “make certain adjustment to the triangular mesh vertex coordinates of the near clipping plane. make distance clipping plane near the vertex of the triangular mesh moves on the plane, such as the triangular grid processing in FIG. 9, FIG. 13. Compared with the cutting result of FIG. 9 and FIG. 13 the result not only solves the problem of triangular narrow, but also obviously reduces the new points generated in the cutting process and simplifies the cutting process, set of features.” FIG. 9. Is a cutting plane cut triangular grid based on the feature, curvature with divided area, generating a geographical structure.); and
processing the geographical structure through a refinement process using the set of features for vertices of at least one respective tooth (Page 10 paragraph 7 - step (5) cutting and splicing the two models to generate a complete tooth model, “a refinement process”.
Page 11 paragraph 3 – step (5) after cutting between the dental crown model to be modelled and cutting to obtain the dental modelbuilding triangle grid modelling the jointing is finished. in the process of splicing “refinement process” using a triangular mesh, will often appear to be modelled dental crown model and similar dental model of different sizes, “processing the geographical structure” . needs to be used again when determining the clipping plane AABB axis aligned bounding box generated by the four surfaces respectively parallel with the x-axis and y-axis in the bounding box is determined to be modelled dental crown and cutting the tooth root width by length and width data of the two model cutting to obtain the scaled tooth model, finally to be model modeling with cut tooth are connected by triangular finish jointing, as shown in FIG. 14. ).
Li does not disclose the following limitations; however, Parpara discloses:
accessing an image depicting a dental arch of a user ([0109] – the treatment plan may be generated based on an intraoral scan of a dental arch to be modeled. The intraoral scan of the patient's dental arch may be performed to generate a three dimensional (3D) virtual model of the patient’s dental arch, mold. Fig. 3A depicts the image of dental arch of a user, a separate virtual 3D model of the patient's dental arch at that treatment stage may be generated. , “accessing an image”.
[media_image1.png — 662x516 greyscale image reproduced from Parpara]);
identifying, from the image, a set of features of the dental arch ([0121] Each patient has a unique dental arch with unique gingiva. the shape and position of the cutline may be customized for each patient and for each stage of treatment. The cutline is customized to follow along the gum line (also referred to as the gingival line).
[0123] Identify the gingival cutline by first defining initial gingival curves along a line around a tooth (LAT) of a patient's dental arch in the virtual 3D model image(also referred to as a digital model) of the patient's dental arch for a treatment stage. The gingival curves may include interproximal areas between adjacent teeth of a patient as well as areas of interface between the teeth and the gums. The initially defined gingival curves may be replaced with a modified dynamic curve that represents the cutline.);
generating a 3D model comprising a 3D surface of the dental arch ([0109] – generate a three dimensional (3D) virtual model of the patient’s dental arch, a 3D surface model.
[media_image1.png — 662x516 greyscale image reproduced from Parpara]);
generating, using a convolutional neural network, based on the dental arch depicted in the image ([0168] the model may be a machine learning model, convolutional neural networks, that is trained to identify one or more high risk areas for one or more defects at one or more locations of the plastic shell image associated with a dental arch of a patient [0005]. Processing logic may train a machine learning model to generate the trained machine learning model.
[0169] - the machine learning model is convolutional neural networks.
[media_image1.png — 662x516 greyscale image reproduced from Parpara]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with accessing an image depicting a dental arch of a user; generating a 3D model comprising a 3D surface of the dental arch; and generating, using a convolutional neural network, based on the dental arch depicted in the image, as taught by Parpara. The motivation for doing so is to improve accuracy.
Li in view of Parpara does not disclose the following limitation; however, Ho discloses:
data is generated based on a camera projection matrix accounting for a depth of the data in the image ([0063] - "When a color stream is present generating pixel color values for individual images, process 900 includes 'convert depth image data to 3D depth points' 910", "generated". This is based on an inverse of a camera projection matrix, which has an intrinsic parameter of the depth camera, where for the inverse matrix each depth pixel (u, v) in the image coordinates is converted back to (x, y, z) in 3D depth camera coordinates, "based on a camera projection matrix accounting for a depth of the data in the image".).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li in view of Parpara such that data is generated based on a camera projection matrix accounting for a depth of the data in the image, as taught by Ho. The motivation for doing so is that the measurement can be more accurate.
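Ho's cited back-projection — converting each depth pixel (u, v) back to (x, y, z) in 3D depth camera coordinates using the inverse of the camera projection (intrinsic) matrix — reduces to the standard pinhole-camera relations. A minimal sketch follows; the intrinsic values are made up for illustration:

```python
import numpy as np

def backproject(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Invert the pinhole projection: pixel (u, v) at the given depth
    becomes a 3D point (x, y, z) in camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics: focal lengths 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(backproject(320.0, 240.0, 2.0, K))  # principal-point pixel lies on the optical axis
```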
Regarding claim 18, Li in view of Parpara and Ho discloses all the limitations of claim 17, including the dental arch of the user.
Li discloses generating a dental aligner customized for the user based on the 3D model of the dental arch (Page 11, paragraphs 3 and 4 - "after cutting between the dental crown model to be modelled and cutting to obtain the dental model building triangle grid modelling the jointing is finished. in the process of splicing using a triangular mesh, will often appear to be modelled dental crown model and similar dental model of different sizes. needs to be used again when determining the clipping plane AABB axis aligned bounding box generated by the four surfaces respectively parallel with the x-axis and y-axis in the bounding box is determined to be modelled dental crown and cutting the tooth root width by length and width data of the two model cutting to obtain the scaled tooth model, finally to be model modeling with cut tooth are connected by triangular finish jointing, as shown in FIG. 14. the modelling experiment shows that dental modelling method based on three-dimensional searching when the dental crown data is modelled to model in the model base is present to be modeling the dental crown data higher similarity of tooth model can obtain needed tooth model tooth by the modelling method of the invention".
The AABB axis-aligned bounding box algorithm is performed based on the three-dimensional dental model.
Page 11, paragraphs 1 to 3; Page 2, last 3 paragraphs - As shown in Fig. 14, the three-dimensional final model generated is based on Fig. 9. The regions form a whole tooth crown curved surface, representing the shape of the original dental crown curved surface; the invention is a dental modelling method based on a three-dimensional model.).
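Li's cited AABB step — a minimum axis-aligned cuboid around the crown model, whose face in the negative z-axis direction gives the desired cutting plane — reduces to a per-axis min/max over the vertices. A minimal sketch with illustrative vertex data (not Li's data):

```python
import numpy as np

def aabb(vertices: np.ndarray):
    """Axis-aligned bounding box: componentwise min and max of the vertices."""
    return vertices.min(axis=0), vertices.max(axis=0)

verts = np.array([[0.0, 1.0, -2.0],
                  [3.0, -1.0, 5.0],
                  [1.0, 2.0, 0.0]])
lo, hi = aabb(verts)
cutting_plane_z = lo[2]   # plane of the box face on the negative z side
print(cutting_plane_z)    # prints -2.0
```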
Regarding claim 19, Li in view of Parpara and Ho discloses all the limitations of claim 18, including the dental arch of the user.
Li discloses identifying a plurality of regions of the image that each depict the at least one respective tooth from the dental arch (page 3, paragraphs 1 to 5 - Based on the following steps:
“(1) establishing a dental model base, and extracting four feature values by performing the region division to the crown;
(2) reading to be modelled data model, the data model to be modelled area dividing, according to the area division calculation descriptors of the crown and the teeth type by type identification judgement;
four characteristic value in said step (1), respectively is a region type, total curvature of the region,
area relative to the side length of the area and an adjacent area.”
Four feature values of the divided areas for the teeth types are identified for their respective regions.); and
using to determine features describing the at least one respective tooth depicted in each respective region of the plurality of regions of the image (page 3, paragraphs 1 to 5 - Based on the following steps:
“(1) establishing a dental model base, and extracting four feature values by performing the region division to the crown;
(2) reading to be modelled data model, the data model to be modelled area dividing, according to the area division calculation descriptors of the crown and the teeth type by type identification judgement;
four characteristic value in said step (1), respectively is a region type, total curvature of the region,
area relative to the side length of the area and an adjacent area.”
Using area division calculation descriptor to determine 4 feature values of divided areas for the teeth types, plurality of regions.”).
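The four-feature-value descriptor quoted above (region type, total curvature, area, relative side length) can be sketched as a simple record with a toy similarity search; the field names follow the quoted passage, while the similarity rule and all data are hypothetical:

```python
# Hypothetical sketch of the four feature values per divided crown region:
# region type (convex / double-inflection / concave), total curvature,
# area, and the region's side length relative to the adjacent region.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionDescriptor:
    region_type: str       # "convex", "double_inflection", or "concave"
    total_curvature: float
    area: float
    relative_side_length: float

def similarity(a, b):
    """Toy similarity: region types must match, then compare the numbers."""
    if a.region_type != b.region_type:
        return 0.0
    diff = (abs(a.total_curvature - b.total_curvature)
            + abs(a.area - b.area)
            + abs(a.relative_side_length - b.relative_side_length))
    return 1.0 / (1.0 + diff)

# Retrieve the most similar stored descriptor for a query region.
query = RegionDescriptor("convex", 1.2, 3.0, 0.5)
library = [RegionDescriptor("concave", 1.2, 3.0, 0.5),
           RegionDescriptor("convex", 1.0, 3.1, 0.5)]
best = max(library, key=lambda d: similarity(query, d))
```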
Parpara discloses using a neural network to determine data ([0168] - the model may be a machine learning model, e.g., convolutional neural networks, that is trained to identify one or more high-risk areas for one or more defects at one or more locations of the plastic shell associated with a dental arch of a patient [0005]. Processing logic may train a machine learning model to generate the trained machine learning model.
[0169] - the machine learning model is a convolutional neural network.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with using a neural network to determine data, as taught by Parpara. The motivation for doing so is to improve accuracy.
Regarding claim 20, Li in view of Parpara and Ho disclose all the limitations of claim 18, including the dental arch of the user.
Li discloses assigning a respective label to each respective region of the plurality of regions of the image based on the features identified from the respective region (Page 7, paragraph 8 - "wherein the region types of the area division in step (1) are three, respectively reflecting the cusp-like feature areas on the crown curved surface (convex areas), the ridge-like feature areas (double-inflection-point areas), and the groove- or trough-like feature areas (concave areas). The divided area is marked with its corresponding type as an extracted feature value, used for retrieval, storage, and similarity comparison, i.e., a label."
The divided areas marked with their corresponding types serve as features under the area-based tooth-type identification method, as shown in the five steps on page 3, paragraphs 1 to 5.),
each respective label identifying a tooth type of the at least one respective tooth of the dental arch that is depicted in the respective region of the plurality of regions of the image (page 3, paragraphs 1 to 5 - based on the following steps:
"(1) establishing a dental model base, and extracting four feature values by performing region division on the crown;
(2) reading the data model to be modelled, dividing the data model to be modelled into areas, calculating descriptors of the crown according to the area division, and identifying the tooth type by type-identification judgement;
the four feature values in step (1) are, respectively, the region type, the total curvature of the region, the area of the region, and the side length of the region relative to the adjacent region."
Four feature values of the divided areas for the tooth types are identified for their respective regions.).
Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (CN 106504331 A, published 2017-03-15) in view of Parpara et al. (Publication: US 2019/0102880 A1), Ho (Publication: US 2018/0218513 A1), and Ouderkirk et al. (Publication: US 2019/0295503 A1).
Regarding claim 5, see the rejection of claim 13.
Regarding claim 13, Li in view of Parpara and Ho disclose all the limitations of claim 11, including the first voxel grid.
Parpara discloses (i) the set of features of the dental arch ([0121] - Each patient has a unique dental arch with unique gingiva; thus, the shape and position of the cutline may be unique and customized for each patient and for each stage of treatment. For instance, the cutline is customized to follow along the gum line (also referred to as the gingival line).
[0123] - Identifying a gingival cutline by first defining initial gingival curves along a line around a tooth (LAT) of a patient's dental arch in the virtual 3D model (also referred to as a digital model) of the patient's dental arch for a treatment stage. The gingival curves may include interproximal areas between adjacent teeth of a patient as well as areas of interface between the teeth and the gums. The initially defined gingival curves may be replaced with a modified dynamic curve that represents the cutline.
[media_image1.png: greyscale image, 662 x 516]
)
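The replacement of initial gingival curves with a modified dynamic curve, as quoted from Parpara [0123], can be illustrated with a hypothetical sketch; Parpara's actual curve-fitting method is not specified here, so a simple moving average stands in for the smoothing:

```python
# Hypothetical sketch: replace an initial gingival curve (a polyline of
# points along the gum line) with a smoothed "dynamic" curve via a
# moving average. This only illustrates the replace-with-smoother-curve
# idea; it is not Parpara's disclosed algorithm.

def smooth(points, window=3):
    """Moving-average smoothing of a list of (x, y) points."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

initial_curve = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]  # jagged gum line
cutline = smooth(initial_curve)  # smoothed curve representing the cutline
```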
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with (i) the set of features of the dental arch, as taught by Parpara. The motivation for doing so is to improve accuracy.
Ho discloses (ii) incorporating the depth data as a plurality of depths into the [[first voxel grid]] ([0063] - When a color stream is present generating pixel color values for individual images, process 900 may include "convert depth image data to 3D depth points" 910. This is based on the inverse of a camera projection matrix holding the intrinsic parameters of the depth camera, where, using the inverse matrix, each depth pixel (u, v) in image coordinates is converted back to (x, y, z) in 3D depth-camera coordinates.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li in view of Parpara with (ii) incorporating the depth data as a plurality of depths into the [[first voxel grid]], as taught by Ho. The motivation for doing so is that the measurement can be more accurate.
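The conversion in Ho [0063] of a depth pixel back to 3D camera coordinates is the standard inverse pinhole projection; a minimal sketch follows, with hypothetical intrinsic parameters (fx, fy, cx, cy):

```python
# Unproject a depth pixel (u, v) with depth d back to 3D camera
# coordinates using the inverse of the camera intrinsic matrix:
#   x = (u - cx) * d / fx,  y = (v - cy) * d / fy,  z = d

def unproject(u, v, depth, fx, fy, cx, cy):
    """Inverse pinhole projection of one depth pixel to (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical depth-camera intrinsics (focal lengths and principal point).
fx = fy = 500.0
cx, cy = 320.0, 240.0

# A pixel 100 columns right of the principal point, 2.0 m deep.
point = unproject(420, 240, 2.0, fx, fy, cx, cy)
```

Each such 3D point can then be binned into a voxel grid, which is how depth data becomes a plurality of depths in a volumetric representation.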
Li in view of Parpara and Ho do not disclose, but Ouderkirk discloses,
applying a bit mask to the image to the identified plurality of regions for scanning depth data ([0060] - a display system may use a bit-depth mask that specifies bit depths for a variety of regions according to a pre-determined pattern. Display systems may accordingly drive portions of an integral display that correspond to a particular region of the bit-depth mask at the bit depth that is associated with that particular region of the bit-depth mask ("identified plurality of regions for scanning depth data"). For example, if a region of a bit-depth mask indicates a bit depth of 5 bits, the display system may drive the corresponding region of an integral display at a bit depth of 5 bits.),
wherein the bit mask is configured based on (i) and (ii) ([0060], quoted above).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li in view of Parpara and Ho with applying a bit mask to the image to the identified plurality of regions for scanning depth data, wherein the bit mask is configured based on (i) and (ii), as taught by Ouderkirk. The motivation for doing so is to reduce the amount of image data handled and thereby improve power efficiency.
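The region-based bit-depth mask of Ouderkirk [0060] can be illustrated by a hypothetical sketch that quantizes each pixel to the bit depth its mask region specifies; the data and the rounding rule are assumptions, not taken from Ouderkirk:

```python
# Hypothetical sketch: quantize each pixel to the bit depth given by a
# per-pixel bit-depth mask (e.g. 5 bits in high-detail regions, fewer
# elsewhere), as in a region-based bit-depth mask.

def quantize(value, bits):
    """Round an 8-bit value (0-255) to the nearest level at `bits` depth,
    using integer arithmetic, then map it back to the 0-255 range."""
    levels = (1 << bits) - 1
    level = (value * levels + 127) // 255  # nearest quantization level
    return level * 255 // levels

def apply_bit_mask(image, mask):
    """image and mask are equal-shaped 2D lists; mask holds bit depths."""
    return [[quantize(p, b) for p, b in zip(prow, mrow)]
            for prow, mrow in zip(image, mask)]

image = [[200, 200], [200, 200]]
mask = [[8, 5], [2, 1]]      # drive different regions at different depths
out = apply_bit_mask(image, mask)
```

Regions driven at lower bit depths carry fewer distinct values, which is the data-reduction effect cited as the motivation above.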
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu, whose telephone number is (571) 270-0724. The examiner can normally be reached Monday - Friday, 9:30 am - 6:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING WU/
Primary Examiner, Art Unit 2618