DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1 and 12 each repeatedly recite the limitation "a setup". Claims 5, 6, 16 and 17 recite the limitation "the setup". It is unclear whether these are all the same "setup" or different setups. There is insufficient antecedent basis for this limitation in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-8, 10 and 12-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nikolskiy et al. (US 2023/0145042) in view of Somasundaram et al. (US 2017/0169562).
In regard to claim 1, Nikolskiy et al. teach a computer-implemented method for generating setups for orthodontic alignment treatment, comprising: receiving, by one or more computer processors, a first digital representation of a patient's teeth (fig. 11 and paragraph 67, full geometric encoded representation); using, by the one or more computer processors and to determine a prediction for one or more tooth movements for a setup, a generator that comprises one or more neural networks (fig. 11 and paragraph 65) and that has been initially trained to predict one or more tooth movements for a setup (paragraph 65, determining final position of the teeth); further training, by the one or more computer processors, the generator based on the using, wherein the training of the neural network is modified by performing operations comprising: predicting, by the generator, one or more tooth movements for a setup based on the first digital representation of the patient's teeth, wherein the one or more tooth movements are described by at least one of a position and an orientation (fig. 11 element 1106 and paragraph 70); quantifying, by the generator, the difference between a representation of the one or more tooth movements predicted by the generator and a representation of one or more reference tooth movements (fig. 11 element 1108); generating a loss value based on the quantifying (element 1112 and paragraph 73, error signal); and modifying the generator based at least in part on the loss value (paragraph 74), but does not teach wherein the first digital representation includes a plurality of mesh elements and a respective mesh element feature vector associated with each mesh element in the plurality of mesh elements.
Somasundaram et al. teach wherein the first digital representation includes a plurality of mesh elements and a respective mesh element feature vector associated with each mesh element in the plurality of mesh elements (paragraphs 24 and 33).
The two references are analogous art because both are in the same field of invention, namely image processing in dental applications.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to provide the apparatus of Nikolskiy et al. with the feature vectors of Somasundaram et al. The rationale is as follows: the feature vectors would work equally well in the apparatus of Nikolskiy et al. as they do separately. One of ordinary skill in the art would recognize that the feature vectors would provide predictable results and would allow for accurate and detailed descriptions of 3-D images.
In regard to claims 2 and 13, Nikolskiy et al. teach wherein a mesh element comprises a voxel (paragraph 32, voxel representation).
Somasundaram et al. teach at least one of a vertex (paragraph 32, vertices), an edge (paragraph 33, maximum and minimum curvature), and a face (paragraph 32).
In regard to claims 3 and 14, Somasundaram et al. teach wherein a mesh feature comprises at least one of a spatial feature and a structural feature (paragraph 32, mesh comprises faces and vertices).
In regard to claims 4 and 15, Nikolskiy et al. teach producing, by the one or more processors an output describing one or more transforms to be applied to one or more teeth (element 1109).
In regard to claims 5 and 16, Nikolskiy et al. teach wherein the setup is an intermediate setup (fig. 9).
In regard to claims 6 and 17, Nikolskiy et al. teach wherein the setup is a final setup (fig. 9).
In regard to claims 7 and 18, Nikolskiy et al. teach wherein modifying the training of the generator comprises adjusting one or more weights of the generator's one or more neural networks (paragraphs 73 and 74).
In regard to claims 8 and 19, Somasundaram et al. teach wherein the one or more mesh features include vertex XYZ positions (paragraph 33 vertex coordinates), surface normal vectors (paragraph 33 vertex normal), and vertex curvatures (paragraph 33 curvature).
In regard to claim 10, Nikolskiy et al. teach generating, by the one or more computer processors, a digital representation of the patient's teeth based on the one or more reference tooth movements (fig. 9 and paragraph 50).
In regard to claim 12, Nikolskiy et al. and Somasundaram et al. teach all the elements of claim 12 including one or more computer processors; and a non-transitory computer readable storage medium (fig. 1 and paragraph 28).
Claim(s) 9, 11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nikolskiy et al. in view of Somasundaram et al. further considered with Zheng et al. (US 2023/0047647).
In regard to claims 9 and 20, Nikolskiy et al. and Somasundaram et al. teach all the elements of claim 9 except wherein the generator comprises at least one of a three-dimensional U-Net, a three-dimensional encoder, a three-dimensional decoder, a three- dimensional pyramid encoder/decoder, and a multi-layer perceptron (MLP).
Zheng et al. teach wherein the generator comprises at least one of a three-dimensional U-Net (paragraph 49; the specification shows only one of these being used as the generator, not all of them, as the "and" would imply), a three-dimensional encoder, a three-dimensional decoder, a three-dimensional pyramid encoder/decoder, and a multi-layer perceptron (MLP).
The three references are analogous art because all are in the same field of invention, namely image processing in dental applications.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to provide the apparatus of Nikolskiy et al. and Somasundaram et al. with the U-net of Zheng et al. The rationale is as follows: a U-net allows for fast reconstruction with a small amount of image data (paragraph 6).
In regard to claim 11, Nikolskiy et al. and Somasundaram et al. teach all the elements of claim 11 except wherein the generator is also trained at least in part by a discriminator, which comprises one or more neural networks and has been trained to distinguish between predicted tooth movements and reference tooth movements.
Zheng et al. teach wherein the generator is also trained at least in part by a discriminator (fig. 8), which comprises one or more neural networks (paragraph 84) and has been trained to distinguish between predicted images and reference images (paragraphs 86 and 87, Zheng et al. teach using a loss function to train the discriminator).
The three references are analogous art because all are in the same field of invention, namely image processing in dental applications.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to provide the apparatus of Nikolskiy et al. and Somasundaram et al. with the neural network discriminator of Zheng et al. The rationale is as follows: the neural network discriminator of Zheng et al. can be trained and will improve over time.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH R HALEY whose telephone number is (571)272-0574. The examiner can normally be reached 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH R HALEY/ Primary Examiner, Art Unit 2621