DETAILED ACTION
This Office action is responsive to the original claims filed on 05/30/2023. Presently, Claims 1-17 remain pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 3, 13, and 15-16 are objected to because of the following informalities:
Claim 3, Line 10, recites “smallest eigenvalue λ”, which should be changed to “smallest eigenvalue λ3”.
Claim 13, Line 2, recites “the convolutional neural network”, which should be changed to “the convolutional neural network model”.
Claim 15, Line 2, recites “heat map”, which should be changed to “map”, “probability map”, or another suitable term, since the application is not related to “heat”.
Claim 16, Line 2, recites “heat map”, which should be changed to “map”, “probability map”, or another suitable term, since the application is not related to “heat”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1, Lines 4-6, recites “determining full arch data … and partial arch data … by applying the 2D depth image to a convolutional neural network model”. It is unclear whether the limitation “by applying the 2D depth image to a convolutional neural network model” refers to the determination of both full arch data and partial arch data, or to the determination of only one of them. For present purposes of examination, Examiner interprets the recited “by applying the 2D depth image to a convolutional neural network model” as determining one of full arch data and partial arch data, but not both.
Claim 1, Lines 7-8, recites “fully-connected convolutional neural network”. In the field of artificial neural networks, the terms “fully-connected neural network” and “convolutional neural network” are widely used, and a convolutional neural network may contain fully-connected layers, but the above-recited term is not used; there is no network that is a fully-connected neural network and a convolutional neural network at the same time. Hence, it is unclear how the recited term “fully-connected convolutional neural network” differs from “convolutional neural network”. For present purposes of examination, Examiner interprets the recited term “fully-connected convolutional neural network” to refer to a “convolutional neural network”.
Claim 9, Lines 3-4 and 6-7, Claim 10, Lines 1-3, Claim 11, Lines 2-3, 5, 7 and 9, Claim 12, Lines 2-5, and Claim 14, Lines 1-2, recite “fully-connected convolutional neural network”, which has the same issue as discussed above for Claim 1, Lines 7-8, and are interpreted similarly.
Claim 13, Lines 1-2, recites “the detecting the 2D landmark further comprises training the convolutional neural network”, and Line 3, “the convolutional neural network”. The independent Claim 1, Lines 7-8, recites “detecting a 2D landmark … a fully-connected convolutional neural network model”. For present purposes of examination, the recited “the convolutional neural network” in Claim 13 is interpreted to refer to “the fully-connected convolutional neural network”.
Claims 2-8 and 15-17 are also rejected under 35 U.S.C. 112(b) because they inherit the indefiniteness of the claim(s) they respectively depend upon.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to Claims 1-16:
Step 1: the claim is drawn to a method/process, one of the four statutory categories.
Step 2A, Prong One:
The claims recite the limitations of “projecting 3D scan data”, “determining full arch data … and partial arch data …”, “detecting a 2D landmark”, and “back-projecting the 2D landmark” in Claim 1, “determining a projection direction vector by a principal component analysis” in Claim 2, “moving a matrix”, “calculating a covariance”, “operating eigen decomposition”, and “determining the projection direction vector” in Claim 3, “determining w3 as the projection direction vector when …”, and “determining -w3 as the projection direction vector when …” in Claim 4, “the 2D depth image is generated on a projection plane”, and “the projection plane is defined at a location …” in Claim 5, and “the 2D landmark is back-projected in a direction …” in Claim 6, which are, under their broadest reasonable interpretation, limitations that cover performance of the limitation in the mind or mathematical calculations. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind or mathematical calculations, then it falls within the “Mental Processes” grouping or “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A, Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements – the convolutional neural network model in Claims 1, 7 and 13, the fully-connected neural network model in Claims 1, 9-12 and 14, the feature extractor comprising a convolution layer and a pooling layer in Claims 7-8, the convolution process and deconvolution process in Claims 10-12 and 14, and the training of the convolutional neural network in Claim 13. The recited neural networks and their components or processes are recited at a high level of generality (i.e., as a generic artificial neural network performing a generic function of distinguishing types of images or detecting objects in images) such that they amount to no more than mere instructions to apply the exception using a generic neural network. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply a generic artificial neural network performing a generic function of distinguishing types of images or detecting objects in images. Mere instructions to apply an exception using a generic artificial neural network cannot provide an inventive concept.
For the reasons set forth above, Claims 1-16 are not patent eligible.
With regard to Claim 17:
Step 1: the claim is drawn to a device/system, one of the four statutory categories.
Step 2A, Prong One:
The claim depends on Claim 1 and thus recites the limitations of “projecting 3D scan data”, “determining full arch data … and partial arch data …”, “detecting a 2D landmark”, and “back-projecting the 2D landmark” in Claim 1, which are, under their broadest reasonable interpretation, limitations that cover performance of the limitation in the mind or mathematical calculations. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind or mathematical calculations, then it falls within the “Mental Processes” grouping or “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A, Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claim depends on Claim 1 and thus recites the additional elements – the convolutional neural network model in Claim 1, the fully-connected neural network model in Claim 1, and the non-transitory computer-readable storage medium and one hardware processor in Claim 17. The recited neural networks, storage medium and hardware processor are recited at a high level of generality (i.e., as a generic artificial neural network stored in a generic storage medium, performing a generic function of distinguishing types of images or detecting objects in images as executed by a generic processor) such that they amount to no more than mere instructions to apply the exception using a generic neural network on a generic computer. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply a generic artificial neural network performing a generic function of distinguishing types of images or detecting objects in images on a generic computer. Mere instructions to apply an exception using a generic artificial neural network cannot provide an inventive concept.
For the reasons set forth above, Claim 17 is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 9, 13 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chung et al (IEEE Trans Med Imaging. 39(12):3900-3909; hereafter Chung), in view of Takahashi et al (J Prosthodont Res. 65(1):115-118; hereafter Takahashi).
With regard to Claim 1, Chung discloses a method for automatically detecting a landmark (the point and line pair) in three-dimensional (3D) dental scan data (Chung, Page 3902, Column 1, Para 1; “The point and line pairs are subsequently reconstructed (i.e., positioned) in the 3D domain. As the projected bounding plane, pertaining to the scanned model, is originally defined in a 3D domain, the point and line in Mp are automatically positioned in the 3D space.”. Chung discloses a method (termed as “deep pose regression”) for automatically detecting a landmark in 3D dental scan data. The method is also shown in Fig. 1(a).), the method comprising:
projecting 3D scan data (the scanned model) to generate a two-dimensional (2D) depth image (a synthetic depth image) (Chung, Page 3901, Column 2, Para 5; “For the scanned model, a synthetic depth image is generated for the primary axis (i.e., full-arch visible axis).”. The content in the left-upper corner of Fig. 1(a) (Page 3902) demonstrates the projecting procedure.);
detecting a 2D landmark in the 2D depth image using a fully-connected convolutional neural network model (Chung, Page 3902, Column 1, Para 1; “Then, the trained CNN model is used to acquire corresponding point and line pairs for each image (Fig. 1a).”); and
back-projecting the 2D landmark onto the 3D scan data to detect a 3D landmark of the 3D scan data (Chung, Page 3902, Column 1, Para 1; “The point and line pairs are subsequently reconstructed (i.e., positioned) in the 3D domain. … the point and line in Mp are automatically positioned in the 3D space.” Once the landmark detected in the 2D image is positioned in 3D space, it becomes a 3D landmark.).
Chung does not clearly and explicitly disclose the method comprising determining full arch data obtained by scanning all teeth of a patient and partial arch data obtained by scanning only a part of teeth of the patient by applying the 2D depth image to a convolutional neural network model.
Takahashi, in the same field of endeavor, discloses the method comprising determining full arch data obtained by scanning all teeth of a patient and partial arch data obtained by scanning only a part of teeth of the patient (Takahashi, Page 116, Column 1, Para 1; “… 1184 oral photographic images … consisted of four types of dental arches: edentulous, arches with posterior tooth loss (distal extension missing), arches with bounded edentulous space (intermediate missing), and intact dentition (without missing) in each jaw.”. Of the four listed arch types, the first three are partial arch data, and the fourth is full arch data.) by applying the 2D depth image to a convolutional neural network model (Takahashi, Abstract; “The purpose of this study was to develop a method for classifying dental arches using a convolutional neural network (CNN) …”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chung, as suggested by Takahashi, in order to determine whether an image is full arch or partial arch. One of ordinary skill in the art would have been motivated to make the modification for the benefit of improved accuracy of the detected landmark by using different detection models for different arch types.
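For illustration only, the claim-1 pipeline mapped above (projecting 3D scan data to a 2D depth image, applying the depth image to a CNN to determine full or partial arch data, detecting a 2D landmark, and back-projecting it into 3D) can be sketched as follows. This is a minimal sketch, not Chung's or the applicant's implementation: the helper names `detect_3d_landmark`, `classify_arch`, and `detect_landmark_2d` are hypothetical, the neural network models are represented by stand-in callables, and the projection axis is fixed to z for simplicity rather than chosen by PCA as in the dependent claims.

```python
import numpy as np

def detect_3d_landmark(points, classify_arch, detect_landmark_2d):
    """Hypothetical sketch of the claim-1 pipeline (not the actual method)."""
    # Step 1: orthographically project the 3D scan points along z onto a
    # sparse 2D depth image, keeping the nearest (largest-z) depth per pixel.
    depth_image = {}
    for (x, y), z in zip(np.round(points[:, :2]).astype(int), points[:, 2]):
        key = (int(x), int(y))
        depth_image[key] = max(z, depth_image.get(key, -np.inf))

    # Step 2: determine full arch data vs. partial arch data by applying
    # the depth image to a (stand-in) convolutional neural network model.
    arch_type = classify_arch(depth_image)

    # Step 3: detect a 2D landmark (u, v) in the depth image with a
    # (stand-in) landmark-detection model.
    u, v = detect_landmark_2d(depth_image, arch_type)

    # Step 4: back-project the 2D landmark onto the 3D scan data by
    # restoring the depth stored at that pixel.
    return (u, v, depth_image[(u, v)])
```

With stub callables in place of the claimed models, the function simply round-trips a landmark from 2D pixel coordinates back to a 3D point.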
With regard to Claim 2, Chung and Takahashi disclose the method of Claim 1. Chung further discloses wherein the projecting the 3D scan data comprises determining a projection direction vector (the primary axis) by a principal component analysis (Chung, Page 3901, Column 2, Para 5; “For the scanned model, a synthetic depth image is generated for the primary axis (i.e., full-arch visible axis). The primary axis can be easily obtained by principal component analysis (PCA) …”).
With regard to Claim 3, Chung and Takahashi disclose the method of Claim 2. Chung further discloses wherein the determining the projection direction vector comprises:
moving (X' = X - X̄) (Chung, Page 3901, Column 2, Para 5; “… covariance matrix … C = (1/|V|) ∑i (vi - u)(vi^T - u^T) …”. Chung discloses the moving of the matrix V (corresponding to X of the Application) in the calculation of the covariance matrix (i.e., vi - u)) a matrix X = [x1 x2 … xn; y1 y2 … yn; z1 z2 … zn] of a set {pi(xi, yi, zi) | i ∈ {1, 2, …, n}} of coordinates of n 3D points of the 3D scan data (Chung, Page 3901, Column 2, Para 5; “In a given triangular mesh model M = {V, E}, where V and E are sets of vertices and edges, let a 3D vector v ∈ V be a positional vector in set V.”) based on an average value X̄ of the matrix X (Chung, Page 3901, Column 2, Para 5; “Defining the mean vector as u = (1/|V|) ∑i vi …”);
calculating a covariance Σ = cov(X') = (1/(n-1)) X'X'^T for the coordinates of the n 3D points (Chung, Page 3901, Column 2, Para 5; “… covariance matrix … C = (1/|V|) ∑i (vi - u)(vi^T - u^T) …”);
operating (ΣA = AΛ) eigen decomposition of Σ (Chung, Page 3901, Column 2, Para 5; “PCA can be subsequently performed through eigen decomposition or singular value decomposition …”); and
determining the projection direction vector based on a direction vector w3 having the smallest eigenvalue λ among w1 = {w1p, w1q, w1r}, w2 = {w2p, w2q, w2r}, and w3 = {w3p, w3q, w3r}, where A = [w1p w2p w3p; w1q w2q w3q; w1r w2r w3r] and Λ = diag(λ1, λ2, λ3) (Chung, Page 3901, Column 2, Para 5; “The depth image, Mp, is then generated by projecting all vertices to a tight bounding plane that has v2 as a normal vector.”. Here the disclosed vector v2 is the eigenvector corresponding to the smallest eigenvalue).
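For illustration only, the PCA steps recited in Claim 3 (mean-centering the coordinate matrix, computing the covariance, performing eigen decomposition, and selecting the eigenvector of the smallest eigenvalue) can be sketched as follows. This is a minimal NumPy sketch, not the applicant's or Chung's actual implementation, and the function name `projection_direction` is hypothetical.

```python
import numpy as np

def projection_direction(points):
    """Return w3, the direction of least variance of n 3D points.

    Hypothetical sketch of the claim-3 steps: X is the 3 x n coordinate
    matrix, X' = X - Xbar is the mean-centered matrix ("moving"), the
    covariance is Sigma = (1/(n-1)) X'X'^T, and eigen decomposition
    Sigma A = A Lambda yields eigenvectors w1, w2, w3 as columns of A.
    """
    X = np.asarray(points, dtype=float).T      # 3 x n matrix X
    Xc = X - X.mean(axis=1, keepdims=True)     # moving: X' = X - Xbar
    n = X.shape[1]
    Sigma = (Xc @ Xc.T) / (n - 1)              # covariance of X'
    eigvals, A = np.linalg.eigh(Sigma)         # eigh sorts eigenvalues ascending
    return A[:, 0]                             # w3: smallest eigenvalue
```

For a dental arch spread roughly in a plane, w3 approximates the normal of that plane, i.e., the normal vector of the bounding plane onto which the depth image is projected.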
With regard to Claim 4, Chung and Takahashi disclose the method of Claim 3. Chung further discloses wherein the determining the projection direction vector comprises:
determining w3 as the projection direction vector when η̄ is an average of normal vectors of the 3D scan data and w3 ∙ η̄ > 0; and
determining -w3 as the projection direction vector when η̄ is an average of normal vectors of the 3D scan data and w3 ∙ η̄ ≤ 0 (According to the specification of the Application (Para 0059; “When the teeth protrude upward, the average of the normal vectors of the set of the triangles of the 3D scan data represents an upward direction. In contrast, when the teeth protrude downward, the average of the normal vectors of the set of the triangles of the 3D scan data generated a downward direction.”), the average of normal vectors of the 3D scan data corresponds to the direction from tooth root to occlusal surface, so the limitations in the current claim require the projection to be along that direction. Chung discloses a projection direction vector along the “full-arch visible axis” (Page 3901, Column 2, Para 5; “For the scanned model, a synthetic depth image is generated for the primary axis (i.e., full-arch visible axis).”). In addition, in Fig. 1a (partially shown below), the projecting in Chung is along the direction from tooth root to occlusal surface (see the red arrows), agreeing with the claim limitations).
[media_image1.png (greyscale): Chung, Part of Fig. 1a]
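For illustration only, the claim-4 sign selection (choosing w3 or -w3 according to the sign of w3 ∙ η̄, where η̄ is the average of the normal vectors of the 3D scan data) can be sketched as follows; the function name `orient_projection_vector` is hypothetical and not a function from Chung or the Application.

```python
import numpy as np

def orient_projection_vector(w3, normals):
    """Return w3 if w3 . eta_bar > 0, else -w3 (hypothetical sketch).

    eta_bar is the average of the normal vectors of the 3D scan data;
    per the Application (Para 0059), it points from tooth root toward
    the occlusal surface, so the chosen vector projects along that axis.
    """
    eta_bar = np.asarray(normals, dtype=float).mean(axis=0)  # average normal
    return w3 if np.dot(w3, eta_bar) > 0 else -w3
```

Either input orientation of w3 thus yields the same, consistently oriented projection direction.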
With regard to Claim 5, Chung and Takahashi