Prosecution Insights
Last updated: April 19, 2026
Application No. 18/764,335

SYSTEM AND METHOD FOR ADJUSTING DENTAL MODELS

Non-Final OA (§103, §112)

Filed: Jul 04, 2024
Examiner: LI, RAYMOND CHUN LAM
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Sprintray Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 10 total applications across all art units, 10 currently pending

Statute-Specific Performance

§103: 55.6% (+15.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)

Note: Tech Center averages are estimates; figures are based on career data from 0 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities: Paragraph [0011] recites "the a preset correspondence", which should be corrected to "a preset correspondence". Paragraph [0098] recites "As shown in FIG. 12, 24 categories may be described in the 3D space through the display of a sphere"; FIG. 12 does not demonstrate 24 categories, so the passage should be rewritten to convey the intended meaning more accurately, which is construed to be that FIG. 12 demonstrates the capability of 24 categories to be represented in the 3D space. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "an acquisition module", "a moving module", and "an adjustment module" in Claim 11.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL. The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 5, 6, 8, 9, 16, 17, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Claim 5 recites a "first preset direction", which the specification references in Figure 12, where it is depicted as originating from the coordinate origin. However, it is unclear what exactly a "first preset direction" is in the context of the claimed invention. A "first preset direction" is defined only in relation to other elements, most notably in Paragraph [0011], which defines it as "corresponding to the first category by using the preset correspondence"; both "category" and "correspondence" similarly lack definition with regard to the claimed invention. The invention centers on the movement and orientation of a 3D dental model, in the form of a 3D point cloud, within a 3D space. While a "first preset direction" is understandable with regard to general movement of a 3D dental model, it is unclear in what way a dental model is linked to a "first preset direction", considering that Figure 12 does not display a dental model. The breadth of the claims becomes unclear due to the lack of definition of a "first preset direction", which can vary significantly in interpretation.

There exists significant prior art on translating and rotating point clouds (which may be dental models) to align them to particular orientations by extracting features from the point clouds and then computing translation and rotation matrices from the 3D correspondences between them. However, "direction" as a broad term is not prevalent in the relevant prior art. While "direction" could potentially be interpreted as related in some form to translation or rotation matrices for orienting point clouds, there is no significant indication in the specification that this is the case. A person having ordinary skill in the art could interpret "a first preset direction" as a general term for how to move a model from an initial orientation to a second orientation. However, because a "first preset direction" is not explicitly or implicitly linked to a known method for doing so, it would be difficult for a person having ordinary skill in the art to understand exactly what a "first preset direction" is in the context of the invention. Given that orienting point clouds is well established in the art, it would not necessarily be unreasonable to apply known methods to the understanding and application of a "first preset direction"; however, because a "first preset direction" is intrinsically linked to other elements that similarly lack definition, such as "category" and "display direction", doubt is cast on the assumed definition. While the specification amply describes how a "first preset direction" is linked to other elements such as "category" and "display direction", it lacks a baseline description and definition of those elements, leaving no clear basis for understanding how the limitations are to be read against the invention as a whole. The lack of working examples for the claimed invention in connection with a "first preset direction" and the associated undefined elements results in an unclear understanding of the claimed invention. As a result of the foregoing analysis, undue experimentation would be required to make the invention based on the contents of the disclosure. For purposes of examination, a "first preset direction" will be interpreted under the broadest reasonable interpretation as translation and/or rotation matrices.
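To make that interpretation concrete, the following minimal sketch (Python with numpy; the matrix values and the helper name apply_preset_direction are hypothetical illustrations, not taken from the application or the cited art) shows a "preset direction" read as a fixed rotation and translation applied to point cloud data:

    import numpy as np

    # Hypothetical "first preset direction" under the examiner's reading:
    # a fixed rotation matrix R and translation vector t applied to an
    # (N, 3) point cloud representing the 3D model.
    theta = np.pi / 2  # example value: 90 degrees about the z-axis
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.0, 0.0, 5.0])  # example translation

    def apply_preset_direction(points):
        """Move an (N, 3) point cloud by the preset rotation and translation."""
        return points @ R.T + t

    cloud = np.random.rand(100, 3)  # stand-in for dental-model point cloud data
    moved = apply_preset_direction(cloud)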
Claim 6 recites a "first display direction" and a "first category", which are referenced in Figure 12 of the specification; Paragraph [0096] describes the "first display direction" as belonging to the "first category", where the "preset direction" also corresponds to the "first category". As noted above in the analysis of Claim 5, the "first display direction" and "first category" lack definitions and are only tangentially defined in relation to other, similarly undefined elements. The invention centers on the movement and orientation of a 3D dental model, in the form of a 3D point cloud, within a 3D space. While a "first display direction" and "first category" are understandable with regard to a general orientation and description of 3D space, it is unclear in what way a dental model is linked to a "first display direction" and "first category", considering that Figure 12 does not display a dental model. The breadth of the claims becomes unclear due to the lack of definition of a "first display direction" and "first category", which can vary significantly in interpretation.

There exists significant prior art on translating and rotating point clouds (which may be dental models) to align them to particular orientations by extracting features from the point clouds and then computing translation and rotation matrices from the 3D correspondences between them. However, "display direction" and "category" as broad terms are not prevalent in the relevant prior art. While "display direction" and "category" could potentially be interpreted as related in some form to an initial orientation and some breakdown of the 3D space relative to the coordinate origin, the meaning becomes unclear when considered alongside "direction". A person having ordinary skill in the art could interpret "display direction" and "category" as general terms for orientation and coordinate ranges. However, because "display direction" and "category" are not explicitly or implicitly linked to a known method, it would be difficult for a person having ordinary skill in the art to understand exactly what they are in the context of the invention. Given that orienting point clouds is well established in the art, it would not necessarily be unreasonable to apply known methods to the understanding and application of a "display direction" and "category"; however, because they are intrinsically linked to other elements that similarly lack definition, doubt is cast on their assumed definitions. While the specification amply describes how a "display direction" and "category" are linked to other elements such as "direction", it lacks a baseline description and definition of those elements, leaving no clear basis for understanding how the limitations are to be read against the invention as a whole.
The lack of working examples for the claimed invention in connection with a "display direction" and "category" and the associated undefined elements results in an unclear understanding of the claimed invention. As a result of the foregoing analysis, undue experimentation would be required to make the invention based on the contents of the disclosure. For purposes of examination, a "display direction" and "category" will be interpreted under the broadest reasonable interpretation as a starting orientation and a set of starting orientations, respectively.

Claim 8 recites a "second preset direction", to which the enablement analysis of Claim 5 applies. Claim 9 recites a "second display direction" and "second category", to which the enablement analysis of Claim 6 applies. Claim 16, being similar in scope to Claim 5, is rejected under the same rationale for lack of enablement. Claim 17, being similar in scope to Claim 6, is rejected under the same rationale for lack of enablement. Claim 20, being similar in scope to Claim 9, is rejected under the same rationale for lack of enablement. All dependent claims of Claims 5, 6, 8, 9, 16, 17, 19, and 20 are similarly rejected.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5, 6, 8, 9, 16, 17, 19, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 5, the term "first preset direction" is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Refer to the analysis of "first preset direction" under the lack-of-enablement rejection under 35 U.S.C. 112(a). Claim 5 also recites the limitation "the first preset direction" in line 4; there is insufficient antecedent basis for this limitation in the claim.

Regarding Claim 6, the terms "first display direction" and "first category" are relative terms which render the claim indefinite. The terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Refer to the analysis of "first display direction" and "first category" under the lack-of-enablement rejection under 35 U.S.C. 112(a).

Regarding Claim 8, the term "second preset direction" is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Refer to the analysis of "second preset direction" under the lack-of-enablement rejection under 35 U.S.C. 112(a). Claim 8 also recites the limitation "the second preset direction" in line 4; there is insufficient antecedent basis for this limitation in the claim.

Regarding Claim 9, the terms "second display direction" and "second category" are relative terms which render the claim indefinite. The terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Refer to the analysis of "second display direction" and "second category" under the lack-of-enablement rejection under 35 U.S.C. 112(a).

Claim 16, being similar in scope and structure to Claim 5, is rejected under the same analysis. Claim 17, being similar in scope and structure to Claim 6, is rejected under the same analysis. Claim 19, being similar in scope and structure to Claim 8, is rejected under the same analysis. Claim 20, being similar in scope and structure to Claim 9, is rejected under the same analysis. All dependent claims of Claims 5, 6, 8, 9, 16, 17, 19, and 20 are similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (WO 2021257094 A1) in view of Chernov (US 20220165388 A1).

Regarding Claim 1, Lee teaches a method for adjusting a model, comprising:

obtaining point cloud data of a model in a 3D space (Paragraph [0021]: "The apparatus may orient 102 a model point cloud or a scanned point cloud based on a set of initial orientations. An orientation is a position in a 3D space. For example, an orientation may express a rotation and/or translation of an object model and/or point cloud in 3D space". Notes: a point cloud is either scanned or implicitly obtained);

moving the model to a preset position in the 3D space based on the point cloud data (Paragraph [0021], continuing: "An initial orientation is a starting orientation of an object model and/or point cloud. For example, a set of initial orientations may include initial orientations for the model point cloud and/or scanned point cloud from which feature determination, correspondence score determination, and/or alignment procedures may be performed. In some examples, orienting 102 a model point cloud or a scanned point cloud based on a set of initial orientations may include orienting (e.g., computing an orientation of) the model point cloud and/or scanned point cloud to an orientation (e.g., rotation and/or translation) indicated by an initial orientation or initial orientations in the set of initial orientations"); and

adjusting the model at the preset position to a first preset orientation based on a neural network model (Paragraph [0024]: "The apparatus may determine 104, using a first portion of a machine learning model, first features of the model point cloud and second features of the scanned point cloud. A portion of a machine learning model is a part of a machine learning model. Examples of portions of a machine learning model may include a layer or layers, a node or nodes, and/or a connection or connections. In some examples, the first portion of the machine learning model may be a portion to determine, extract, and/or encode features of a point cloud or point clouds. For example, the first features may be values (e.g., data, vectors) that represent the model point cloud (e.g., shape, aspects, and/or characteristics of the model point cloud) and/or the second features may be values (e.g., data, vectors) that represent the scanned point cloud (e.g., shape, aspects, and/or characteristics of the scanned point cloud). The first features and/or the second features may be utilized to determine correspondences (e.g., correspondence scores) between the model point cloud and the scanned point cloud. In some examples, the model point cloud (e.g., original model point cloud, normalized model point cloud and/or model point cloud at an initial orientation, etc.) may be input into the first portion of the machine learning model to determine the first features. In some examples, the scanned point cloud (e.g., original scanned point cloud, normalized scanned point cloud and/or scanned point cloud at an initial orientation, etc.) may be input into the first portion of the machine learning model to determine the second features"; Paragraph [0025]: "In some examples, the first portion of the machine learning model may be a neural network (e.g., artificial neural network (ANN), CNN, DGCNN, etc.). For instance, the first portion of the machine learning model may include edge convolution layers. In some examples, the first portion of the neural network may include multiple edge convolution layers without a global feature aggregation layer. In some examples, the first portion of the machine learning model may provide and/or indicate features for each point of a point cloud or point clouds (e.g., model point cloud and/or scanned point cloud). Some examples of the first portion of the machine learning model are given herein. Other kinds of machine learning model portions (e.g., neural networks) that operate on point clouds may be used in some examples. In some examples, the first portion of the machine learning model may be referred to as a backbone layer or layers"; Paragraph [0041]: "The apparatus may globally align 108 the model point cloud and the scanned point cloud based on the correspondence scores. For example, the apparatus may globally align the model point cloud to the scanned point cloud or may globally align the scanned point cloud to the model point cloud based on the correspondence scores. In some examples, the apparatus may use a third portion of the machine learning model to globally align 108 the model point cloud and the scanned point cloud. For instance, the third portion of the machine learning model may infer and/or predict a rotation matrix and/or translation matrix to align the model point cloud and the scanned point cloud based on the correspondence scores". Notes: the broadest reasonable interpretation of a preset position is a desired orientation of a point cloud; hence, orienting the point cloud from its initial orientation to the desired orientation is considered adjusting the model to a preset position. Furthermore, neural networks can be used for feature extraction from the point clouds; the extracted features are used to determine correspondence scores between the point cloud to be adjusted and the point cloud that defines the preset position, and the correspondence scores are then used to adjust the point cloud via predicted translation and/or rotation matrices).

Lee does not teach obtaining point cloud data of a dental model. However, Chernov teaches obtaining point cloud data of a dental model in a 3D space (Paragraph [0065]: "The apparatuses and/or methods (e.g., systems, devices, etc.) described below can be used with and/or integrated into an orthodontic treatment plan. The apparatuses and/or methods described herein may be used to segment a patient's teeth from a three-dimensional model, such as a 3D mesh model, a 3D point cloud, or a 3D scan (e.g., CT scan, CBCT scan, MRI scan, etc.)"; Paragraph [0066]: "The three-dimensional scan can generate a 3D mesh model, or a 3D point cloud model representing the patient's arch").

Lee and Chernov are considered analogous art with regard to orienting models represented as point clouds. There is clear motivation to use point clouds for orienting dental models, as evidenced by Chernov doing so. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Lee's method of adjusting the position and orientation of a 3D model from point cloud data using a neural network model with Chernov's application of point cloud data to orienting dental models; doing so would yield the predictable result of adjusting the position and orientation of a 3D dental model from point cloud data using a neural network model.
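Lee's quoted alignment step computes a closed-form rotation and translation from point correspondences (culminating in Lee's Equation (6), which the quoted passages reference but do not reproduce). As a reading aid only, the following sketch shows the standard SVD-based, Kabsch-style construction that the quoted description matches (Python with numpy; the function name closed_form_alignment is hypothetical, and this is a generic textbook version rather than Lee's actual implementation):

    import numpy as np

    def closed_form_alignment(X, Y):
        """Closed-form rotation R and translation t aligning paired points
        X -> Y (both (N, 3)), via SVD of a covariance matrix, consistent
        with the approach Lee describes at Paragraph [0042]."""
        x_bar, y_bar = X.mean(axis=0), Y.mean(axis=0)  # centroids
        H = (X - x_bar).T @ (Y - y_bar)                # 3x3 covariance matrix
        U, S, Vt = np.linalg.svd(H)                    # H = U S V^T
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        t = y_bar - R @ x_bar
        return R, t

    # Usage: recover a known rigid motion from paired clouds.
    theta = np.pi / 4
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    X = np.random.rand(50, 3)
    Y = X @ R_true.T + np.array([1.0, 2.0, 3.0])
    R, t = closed_form_alignment(X, Y)
    X_star = X @ R.T + t  # updated point cloud, cf. Lee Paragraph [0043]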
Regarding Claim 2, the method of Claim 1 is rejected over Lee as modified. Lee as modified teaches a first preset orientation configured to represent at least one of the following:

an orientation of a target tooth in the dental model coincides with an orientation of a first axis in the 3D space (Chernov, Paragraph [0119]: "The transformation engine 188 may implement one or more automated agents configured to adjust the position and orientation of the generic tooth model to better match the position and orientation of the selected segmented tooth from the 3D scan data"; Chernov, Paragraph [0066]: "The three-dimensional scan can generate a 3D mesh model, or a 3D point cloud model representing the patient's arch"; Chernov, Paragraph [0071]: "The scanning system 154 may include a computer system configured to scan a patient's dental arch. A 'dental arch,' as used herein, may include at least a portion of a patient's dentition formed by the patient's maxillary and/or mandibular teeth, when viewed from an occlusal perspective". Notes: the dental model (3D point cloud model) is scanned from the occlusal position, which inherently orients the model with respect to all three axes such that the occlusal plane aligns with two of the three planes);

an orientation of an occlusal surface of the dental model coincides with the orientation of a second axis in the 3D space (Chernov, Paragraphs [0119], [0066], and [0071], quoted above; the same inherency rationale applies); and

an orientation of a wide surface of the tooth jaw in the dental model coincides with the orientation of a third axis in the 3D space (Chernov, Paragraphs [0119], [0066], and [0071], quoted above. Notes: the occlusal scan inherently orients the model with respect to all three axes, and the tooth jaw inherently coincides with the third axis).
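The inherency reasoning above turns on a model direction "coinciding" with a coordinate axis. One numerical reading of coincidence (our illustration only; neither reference specifies a tolerance) is a small angular deviation between an estimated model direction, such as an occlusal-surface normal, and the axis:

    import numpy as np

    def coincides_with_axis(direction, axis, tol_deg=1.0):
        """True if `direction` lies within `tol_deg` degrees of `axis`
        (sign-insensitive, since an orientation has no preferred sense)."""
        d = direction / np.linalg.norm(direction)
        a = axis / np.linalg.norm(axis)
        angle = np.degrees(np.arccos(np.clip(abs(d @ a), 0.0, 1.0)))
        return angle <= tol_deg

    occlusal_normal = np.array([0.01, 0.02, 0.999])  # hypothetical estimate
    print(coincides_with_axis(occlusal_normal, np.array([0.0, 0.0, 1.0])))  # True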
Regarding Claim 3, the method of Claim 2 is rejected over Lee as modified. Lee teaches adjusting the dental model at the preset position to the first preset orientation based on the neural network model, including:

determining feature vectors of the point cloud data (Lee, Paragraph [0024], quoted above in the Claim 1 analysis: the first portion of the machine learning model determines, extracts, and/or encodes features, e.g., vectors, representing the model and scanned point clouds);

rotating the dental model based on the feature vectors to obtain a rotated dental model (Lee, Paragraph [0041], quoted above in the Claim 1 analysis; Lee, Paragraph [0043]: "In some examples, the apparatus (e.g., third portion of the machine learning model) may update the first point cloud X with the computed rotation matrix and translation matrix to produce an updated point cloud X*"); and

adjusting the rotated dental model to the first preset orientation based on the neural network model (Lee, Paragraphs [0024], [0025], and [0041], quoted above in the Claim 1 analysis; Lee, Paragraph [0043], quoted above).

Claim 11, which is similar in scope to Claim 1, is rejected under the same rationale. Claim 12, which is similar in scope to Claim 1, is rejected under the same rationale. Claim 13, which is similar in scope to Claim 2, is rejected under the same rationale. Claim 14, which is similar in scope to Claim 3, is rejected under the same rationale.
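Lee's first machine-learning portion is described as, for example, a DGCNN with edge convolution layers that produce per-point features. For readers unfamiliar with that backbone, the sketch below shows the standard DGCNN-style edge-feature construction such layers consume (Python with numpy; this is a generic illustration of the published DGCNN idea, not Lee's specific architecture, and the function names are ours):

    import numpy as np

    def knn_indices(points, k):
        """Indices of the k nearest neighbors of each point in an (N, 3) cloud."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.argsort(d2, axis=1)[:, 1:k + 1]  # column 0 is the point itself

    def edge_features(points, k=8):
        """DGCNN-style edge features: for each point x_i and neighbor x_j,
        concatenate [x_i, x_j - x_i], yielding an (N, k, 6) tensor that an
        edge-convolution layer would transform into per-point features."""
        idx = knn_indices(points, k)
        neighbors = points[idx]                             # (N, k, 3)
        centers = np.repeat(points[:, None, :], k, axis=1)  # (N, k, 3)
        return np.concatenate([centers, neighbors - centers], axis=-1)

    cloud = np.random.rand(100, 3)
    feats = edge_features(cloud)  # shape (100, 8, 6)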
Claims 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (WO 2021257094 A1) in view of Chernov (US 20220165388 A1), and further in view of Kayser (US 20230124868 A1).

Regarding Claim 4, the method of Claim 3 is rejected over Lee as modified. Lee as modified teaches adjusting the rotated dental model to the first preset orientation based on the neural network model, including:

using a Dynamic Graph Convolutional Neural Network (DGCNN) model to adjust the rotated dental model to a second preset orientation (Lee, Paragraph [0025], quoted above in the Claim 1 analysis: the first portion of the machine learning model may be a neural network such as a DGCNN with edge convolution layers); and

adjusting the second preset orientation of the rotated dental model to the first preset orientation (Lee, Paragraph [0046]: "In some examples, the apparatus may refine a global alignment. For example, the apparatus may align the model point cloud and the scanned point cloud based on the global alignment at a finer scale than the global alignment. For instance, the apparatus may align the model point cloud and the scanned point cloud using an iterative closest point (ICP) technique. In some examples, if the 3D object model is represented as a mesh (in addition to the model point cloud, for instance), the apparatus may utilize a plane-to-plane ICP approach. In some examples, if the 3D object model is not represented as a mesh, the apparatus may utilize a point-to-plane ICP approach. An ICP technique may be utilized to determine a closed form solution of rotation and translation matrices. In some examples, rotation and translation matrices may be determined in accordance with Equation (6). In some examples, the apparatus may utilize the ICP technique to refine the correspondences between the model point cloud and the scanned point cloud with respect to a fixed rotation matrix R_xy and translation matrix T_xy". Notes: Lee teaches a second alignment of the point clouds in a refining stage, which utilizes the ICP algorithm).

Lee as modified does not teach using a residual neural network model to adjust the second preset orientation so as to adjust the rotated dental model to the first preset orientation. However, Kayser teaches using a residual neural network model to adjust a preset orientation so as to adjust the model to another orientation (Paragraph [0038]: "In one embodiment, the encoder 10 extracts characteristics or features from the partial point cloud 2 and estimates the transformation in the embedding space 30. As already mentioned above, in one embodiment, SALs based on PointNet++ and a ResNet-based PointNet are used to process the point cloud (see FIG. 2). By using SALs, local characteristics can be extracted with three different radii and propagated to one or more further layers in order to obtain a global characteristic. As already described, the global characteristic can be attached to all points of the point cloud and entered into the ResNet-based PointNet in order to estimate the transformation embedding 30. The use of both local and global characteristics allows for a particularly accurate estimation of the orientation of the object"; Paragraph [0041]: "In some embodiments of the disclosure, loss functions may be used as described below. As already described above, in one embodiment, the decoder 20 obtains a concatenated vector having a 3D point and embedding vector, in order to estimate SDF values and associated 3D coordinates in the canonical space. In this case, the loss functions of shape loss, L_S, and correspondence loss, L_C, can be formulated in order to estimate accurate SDF values and correspondences to the canonical orientation"; Paragraph [0042]: "With respect to the encoder 10, the partial point cloud 2 of the n-th object, transformed using the k-th rotation, P_n^k, and a one-hot vector, OH_n, are used as input for the encoder 10 in order to estimate transformation embedding, T_k. The transformation embedding along with the object embedding, O_n, are concatenated with each point of the point cloud, X_n^k, and are used as input for the decoder in order to estimate the SDF values, S_n^k, and the 3D point correspondences"; refer to Figure 1 to supplement the foregoing passages. Notes: ResNet is shorthand for a residual neural network. Kayser teaches using ResNet to process a point cloud for accurate estimation of the orientation of the object, where ResNet estimates the transformation embedding 30, as seen in Kayser, Figure 1. Kayser also teaches obtaining 3D point correspondences from transformation embeddings concatenated with each point in the point cloud. As taught by Lee as modified, 3D correspondence values can be used to compute rotation and translation matrices to update the point cloud to match a desired orientation).

Lee as modified and Kayser are considered analogous art with regard to orienting point clouds using neural networks. A common motivation in the art is the efficient use of machine-learning resources. ResNet is a computationally expensive variant of a neural network; hence, in the context of the art, there would be motivation to use ResNet when the orientation of a point cloud needs to be fine-tuned, as opposed to a more drastic orientation task, and using ResNet over another neural network would be appropriate for such a task. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the 3D model orientation method of Lee as modified with Kayser's use of ResNet for adjusting the orientation of a 3D model; doing so would yield the predictable result of a two-step 3D model orientation process, in which the second step is a fine-tuning process after an initial adjustment in the first step.
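The combination reasoned through above amounts to a two-stage pipeline: a coarse global alignment followed by a finer refinement. Lee's Paragraph [0046] names iterative closest point (ICP) for the refinement stage; the sketch below shows the simplest point-to-point variant (Python with numpy; Lee also describes plane-to-plane and point-to-plane variants, which are not shown, and the helper names are ours):

    import numpy as np

    def best_fit(X, Y):
        """Closed-form R, t aligning paired points X -> Y (Kabsch/SVD)."""
        xb, yb = X.mean(axis=0), Y.mean(axis=0)
        U, _, Vt = np.linalg.svd((X - xb).T @ (Y - yb))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, yb - R @ xb

    def icp_refine(source, target, iters=20):
        """Point-to-point ICP: repeatedly pair each source point with its
        nearest target point, re-solve for R and t, and apply them; this is
        the fine-alignment stage Lee describes at Paragraph [0046]."""
        src = source.copy()
        for _ in range(iters):
            d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
            matched = target[d2.argmin(axis=1)]  # nearest-neighbor pairing
            R, t = best_fit(src, matched)
            src = src @ R.T + t
        return src

    refined = icp_refine(np.random.rand(60, 3), np.random.rand(80, 3))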
Regarding Claim 5, the method of Claim 4 is rejected over Lee as modified. Lee as modified teaches adjusting the rotated dental model to the second preset orientation by using the DGCNN model, including:

using the DGCNN model to determine the first preset direction of the rotated dental model in the 3D space (Lee, Paragraph [0041], quoted above in the Claim 1 analysis; Lee, Paragraph [0042]: "In some examples, the second portion of the machine learning model may provide the correspondence scores (e.g., probabilities) for each point in a first point cloud (e.g., model point cloud or scanned point cloud) to a second point cloud (e.g., scanned point cloud or model point cloud). For instance, the correspondence scores may indicate correspondence probabilities of point pairs between the model point cloud and the scanned point cloud. The apparatus (e.g., third portion of the machine learning model) may determine point cloud pairs (X, Y), where X = {x_1, ..., x_n} are the points of the first point cloud and Y = {y_1, ..., y_n} are the corresponding points of the second point cloud. From the point cloud pairs, the apparatus (e.g., third portion of the machine learning model) may compute a closed form solution of the rotation matrix R_xy and translation matrix T_xy from the first point cloud X to the second point cloud Y. For example, the apparatus may compute the rotation matrix and translation matrix using a singular value decomposition of a covariance matrix H = USV^T, where U is an orthogonal matrix, S is a diagonal matrix, V is an orthogonal matrix, T denotes transpose, and x̄ and ȳ are the centroids of X and Y, respectively. Examples of the rotation and translation matrices are given in Equation (6)". Notes: the broadest reasonable interpretation of a preset direction is rotation/translation matrices, which are computed from correspondence scores of the point clouds); and

adjusting the rotated dental model to the second preset orientation by using the first preset direction (Lee, Paragraph [0043], quoted above in the Claim 3 analysis. Notes: the same interpretation of a preset direction as rotation/translation matrices applies).
Regarding Claim 6, the method of Claim 5 is rejected over Lee as modified. Lee as modified teaches determining the first preset direction of the rotated dental model in the 3D space by using the DGCNN model, including:

identifying a first display direction of the rotated dental model in the 3D space by using the DGCNN model (Lee, Paragraphs [0025], [0041], and [0042], quoted above);

classifying the first display direction to obtain a first category to which the first display direction belongs (Lee, Paragraphs [0025], [0041], and [0042], quoted above. Notes: the broadest reasonable interpretation of a "first category" is considered akin to the first display direction, or the general starting orientation; each possible starting orientation can be considered its own category under the broadest reasonable interpretation, and hence the first category is the same as the first display direction (starting direction). Furthermore, Lee teaches generating translation and rotation matrices from an initial starting orientation to a desired orientation state);

determining the first preset direction corresponding to the first category by using a first preset correspondence (Lee, Paragraphs [0041] and [0042], quoted above. Notes: the broadest reasonable interpretation of a preset direction is rotation/translation matrices, which are computed from correspondence scores of the point clouds); and

wherein the first preset correspondence is configured to represent the correspondence between the first category and the first preset direction (Lee, Paragraphs [0041] and [0042], quoted above. Notes: the correspondence scores are used to calculate the translation and rotation matrices, i.e., the preset direction).
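The examiner's reading of a "category" as a starting orientation, together with the specification's reported description of 24 categories in the 3D space (see the objection to Paragraph [0098] above), suggests one concrete illustration: classify the model's current display rotation into the nearest of the 24 axis-aligned orientations, then look up a preset correction for that category. The sketch below is our hypothetical reading (Python with numpy), not a construction that either the application or the cited art discloses:

    import numpy as np
    from itertools import permutations, product

    def canonical_rotations():
        """The 24 rotation matrices that map the coordinate axes onto
        themselves; one natural reading of the specification's 24 categories."""
        mats = []
        for perm in permutations(range(3)):
            for signs in product([1.0, -1.0], repeat=3):
                R = np.zeros((3, 3))
                for row, (col, s) in enumerate(zip(perm, signs)):
                    R[row, col] = s
                if np.isclose(np.linalg.det(R), 1.0):  # keep proper rotations only
                    mats.append(R)
        return mats  # exactly 24 matrices

    CATEGORIES = canonical_rotations()
    # Hypothetical "preset correspondence": category index -> preset direction,
    # here simply the rotation that undoes that category's orientation.
    PRESET_CORRESPONDENCE = {i: R.T for i, R in enumerate(CATEGORIES)}

    def classify_display_direction(R_display):
        """Category whose canonical rotation is closest (Frobenius norm)
        to the model's current display rotation."""
        return int(np.argmin([np.linalg.norm(R_display - R) for R in CATEGORIES]))

    category = classify_display_direction(np.eye(3))
    preset = PRESET_CORRESPONDENCE[category]  # rotation to apply for this category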
The apparatus (e.g., third portion of the machine learning model) may determine point cloud pairs (X, Y), where X = {x.sub.1, . . . , x.sub.n} are the points of the first point cloud and Y = {y.sub.1, . . . , y.sub.n} are the corresponding points of the second point cloud. From the point cloud pairs, the apparatus (e.g., third portion of the machine learning model) may compute a closed form solution of the rotation matrix R.sub.xy and translation matrix T.sub.xy from the first point cloud X to the second point cloud Y. For example, the apparatus may compute the rotation matrix and translation matrix using a singular value decomposition of a covariance matrix H = Σ.sub.i(x.sub.i − x̄)(y.sub.i − ȳ).sup.T = USV.sup.T, where U is an orthogonal matrix, S is a diagonal matrix, V is an orthogonal matrix, T denotes transpose, and x̄ and ȳ are centroids of X and Y, respectively. Examples of the rotation and translation matrices are given in Equation (6)”. Notes: The broadest reasonable interpretation of a preset direction is the rotation/translation matrices, which are computed from the correspondence scores of the point clouds).
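Notes (illustration): the closed-form solution Lee quotes is the standard Kabsch/Procrustes construction. A minimal numpy sketch — assuming exact point pairs are already in hand, and not taken from Lee's disclosure — is:

import numpy as np

def rigid_align(X, Y):
    # Closed-form rotation/translation aligning paired clouds X -> Y, each of shape (n, 3).
    x_bar = X.mean(axis=0)                   # centroid of X
    y_bar = Y.mean(axis=0)                   # centroid of Y
    H = (X - x_bar).T @ (Y - y_bar)          # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)              # H = U S V^T
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation matrix R_xy
    t = y_bar - R @ x_bar                    # translation T_xy
    return R, t

# Recover a known pose, then update X as Lee's paragraph [0043] describes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(X, Y)
X_star = X @ R.T + t                         # updated point cloud X*
assert np.allclose(X_star, Y, atol=1e-6)

The reflection guard keeps R a proper rotation (det R = +1), which the SVD alone does not guarantee.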
Regarding Claim 7, the method of Claim 4 is rejected over Lee as modified. Lee teaches configuring the second preset orientation to represent at least one of the following: The orientation of the target tooth in the rotated dental model coincides with the orientation of the first axis (Chernov, Paragraph [0119]: “The transformation engine 188 may implement one or more automated agents configured to adjust the position and orientation of the generic tooth model to better match the position and orientation of the selected segmented tooth from the 3D scan data”; Chernov, Paragraph [0066]: “The three-dimensional scan can generate a 3D mesh model, or a 3D point cloud model representing the patient's arch”; Chernov, Paragraph [0071]: “The scanning system 154 may include a computer system configured to scan a patient's dental arch. A “dental arch,” as used herein, may include at least a portion of a patient's dentition formed by the patient's maxillary and/or mandibular teeth, when viewed from an occlusal perspective”. Notes: The dental model (3D point cloud model) is scanned from the occlusal position, which inherently orients the model with respect to all 3 axes such that the occlusal plane aligns with 2 of the three planes), The orientation of the occlusal surface of the target tooth in the rotated dental model coincides with a target plane (Chernov, Paragraph [0119]: “The transformation engine 188 may implement one or more automated agents configured to adjust the position and orientation of the generic tooth model to better match the position and orientation of the selected segmented tooth from the 3D scan data”; Chernov, Paragraph [0066]: “The three-dimensional scan can generate a 3D mesh model, or a 3D point cloud model representing the patient's arch”; Chernov, Paragraph [0071]: “The scanning system 154 may include a computer system configured to scan a patient's dental arch. A “dental arch,” as used herein, may include at least a portion of a patient's dentition formed by the patient's maxillary and/or mandibular teeth, when viewed from an occlusal perspective”. Notes: The dental model (3D point cloud model) is scanned from the occlusal position, which inherently orients the model with respect to all 3 axes such that the occlusal plane aligns with 2 of the three planes), And the target plane is constructed from the first axis and the third axis (Chernov, Paragraph [0119]: “The transformation engine 188 may implement one or more automated agents configured to adjust the position and orientation of the generic tooth model to better match the position and orientation of the selected segmented tooth from the 3D scan data”; Chernov, Paragraph [0066]: “The three-dimensional scan can generate a 3D mesh model, or a 3D point cloud model representing the patient's arch”; Chernov, Paragraph [0071]: “The scanning system 154 may include a computer system configured to scan a patient's dental arch. A “dental arch,” as used herein, may include at least a portion of a patient's dentition formed by the patient's maxillary and/or mandibular teeth, when viewed from an occlusal perspective”. Notes: The dental model (3D point cloud model) is scanned from the occlusal position, which inherently orients the model with respect to all 3 axes such that the occlusal plane aligns with 2 of the three planes). Regarding Claim 8, the method of Claim 4 is rejected over Lee as modified. Lee teaches adjusting the second preset orientation by using the residual neural network model to adjust the rotated dental model to the first preset orientation, including: Determining the second preset direction of the rotated dental model in the 3D space by using the residual neural network model (Kayser, Fig 1; Kayser, Paragraph [0038]: “In one embodiment, the encoder 10 extracts characteristics or features from the partial point cloud 2 and estimates the transformation in the embedding space 30. As already mentioned above, in one embodiment, SALs based on PointNet++ and a ResNet-based PointNet are used to process the point cloud (see FIG. 2). By using SALs, local characteristics can be extracted with three different radii and propagated to one or more further layers in order to obtain a global characteristic. As already described, the global characteristic can be attached to all points of the point cloud and entered into the ResNet-based PointNet in order to estimate the transformation embedding 30. The use of both local and global characteristics allows for a particularly accurate estimation of the orientation of the object”; Kayser, Paragraph [0041]: “In some embodiments of the disclosure, loss functions may be used as described below. As already described above, in one embodiment, the decoder 20 obtains a concatenated vector having a 3D point and embedding vector, in order to estimate SDF values and associated 3D coordinates in the canonical space. In this case, the loss functions of shape loss, L.sub.S, and correspondence loss, L.sub.C, can be formulated in order to estimate accurate SDF values and correspondences to the canonical orientation”; Kayser, Paragraph [0042]: “With respect to the encoder 10, the partial point cloud 2 of the n-th object, transformed using the k-th rotation, P.sub.n.sup.k, and a one-hot vector, OH.sub.n, are used as input for the encoder 10 in order to estimate transformation embedding, T.sub.k.
The transformation embedding along with the object embedding, O.sub.n, are concatenated with each point of the point cloud, X.sub.n.sup.k, and are used as input for the decoder in order to estimate the SDF values, S.sub.n.sup.k, and the 3D point correspondences”; Refer to Figure 1 to supplement the aforementioned passages. Notes: ResNet is shorthand for a residual neural network. Kayser teaches using ResNet to process a point cloud for accurate estimation of the orientation of the object, where ResNet is used to estimate the transformation embedding 30, which can be seen in Kayser, Figure 1. Kayser also teaches obtaining 3D point correspondences by concatenating the transformation embedding with each point in the point cloud. As taught by Lee as modified, 3D correspondence values can be used to compute rotation and translation matrices to update the point cloud to match a desired orientation); And adjusting the second preset orientation by using the second preset direction to adjust the rotated dental model to the first preset orientation (Lee, Paragraph [0041]: “The apparatus may globally align 108 the model point cloud and the scanned point cloud based on the correspondence scores. For example, the apparatus may globally align the model point cloud to the scanned point cloud or may globally align the scanned point cloud to the model point cloud based on the correspondence scores. In some examples, the apparatus may use a third portion of the machine learning model to globally align 108 the model point cloud and the scanned point cloud. For instance, the third portion of the machine learning model may infer and/or predict a rotation matrix and/or translation matrix to align the model point cloud and the scanned point cloud based on the correspondence scores”; Lee, Paragraph [0042]: “In some examples, the second portion of the machine learning model may provide the correspondence scores (e.g., probabilities) for each point in a first point cloud (e.g., model point cloud or scanned point cloud) to a second point cloud (e.g., scanned point cloud or model point cloud). For instance, the correspondence scores may indicate correspondence probabilities of point pairs between the model point cloud and the scanned point cloud. The apparatus (e.g., third portion of the machine learning model) may determine point cloud pairs (X, Y), where X = {x.sub.1, . . . , x.sub.n} are the points of the first point cloud and Y = {y.sub.1, . . . , y.sub.n} are the corresponding points of the second point cloud. From the point cloud pairs, the apparatus (e.g., third portion of the machine learning model) may compute a closed form solution of the rotation matrix R.sub.xy and translation matrix T.sub.xy from the first point cloud X to the second point cloud Y. For example, the apparatus may compute the rotation matrix and translation matrix using a singular value decomposition of a covariance matrix H = Σ.sub.i(x.sub.i − x̄)(y.sub.i − ȳ).sup.T = USV.sup.T, where U is an orthogonal matrix, S is a diagonal matrix, V is an orthogonal matrix, T denotes transpose, and x̄ and ȳ are centroids of X and Y, respectively. Examples of the rotation and translation matrices are given in Equation (6)”; Lee, Paragraph [0043]: “In some examples, the apparatus (e.g., third portion of the machine learning model) may update the first point cloud X with the computed rotation matrix and translation matrix to produce an updated point cloud X.sup.*”).
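Notes (illustration): a compact sketch of the residual point-cloud network idea Kayser relies on. This is a generic PyTorch stand-in with hypothetical layer sizes and a unit-quaternion output in place of Kayser's transformation embedding — not the SAL/PointNet++ architecture of the reference:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Per-point residual MLP block (the "ResNet" idea in miniature).
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))  # skip connection makes the block residual

class OrientationNet(nn.Module):
    # Maps a point cloud (B, N, 3) to an orientation estimate (hypothetical sizes).
    def __init__(self, dim=128):
        super().__init__()
        self.lift = nn.Linear(3, dim)                                 # per-point lift from xyz
        self.blocks = nn.Sequential(*[ResidualBlock(dim) for _ in range(3)])
        self.head = nn.Linear(dim, 4)                                 # quaternion stand-in
    def forward(self, pts):
        f = self.blocks(self.lift(pts))                               # per-point features (B, N, dim)
        g = f.max(dim=1).values                                       # global descriptor (B, dim)
        q = self.head(g)
        return q / q.norm(dim=-1, keepdim=True)                       # unit quaternion (B, 4)

model = OrientationNet()
cloud = torch.randn(2, 1024, 3)   # batch of two point clouds
print(model(cloud).shape)         # torch.Size([2, 4])

The max-pool turns per-point features into a single global descriptor from which the orientation is estimated, mirroring the local-to-global feature flow the quoted passages describe.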
Regarding Claim 9, the method of Claim 8 is rejected over Lee as modified. Lee as modified teaches determining the second preset direction of the rotated dental model in the 3D space by using the residual neural network model, including: Identifying a second display direction of the rotated model in the 3D space by using the residual neural network model (Kayser, Fig 1; Kayser, Paragraph [0038]: “In one embodiment, the encoder 10 extracts characteristics or features from the partial point cloud 2 and estimates the transformation in the embedding space 30. As already mentioned above, in one embodiment, SALs based on PointNet++ and a ResNet-based PointNet are used to process the point cloud (see FIG. 2). By using SALs, local characteristics can be extracted with three different radii and propagated to one or more further layers in order to obtain a global characteristic. As already described, the global characteristic can be attached to all points of the point cloud and entered into the ResNet-based PointNet in order to estimate the transformation embedding 30. The use of both local and global characteristics allows for a particularly accurate estimation of the orientation of the object”; Kayser, Paragraph [0041]: “In some embodiments of the disclosure, loss functions may be used as described below. As already described above, in one embodiment, the decoder 20 obtains a concatenated vector having a 3D point and embedding vector, in order to estimate SDF values and associated 3D coordinates in the canonical space. In this case, the loss functions of shape loss, L.sub.S, and correspondence loss, L.sub.C, can be formulated in order to estimate accurate SDF values and correspondences to the canonical orientation”; Kayser, Paragraph [0042]: “With respect to the encoder 10, the partial point cloud 2 of the n-th object, transformed using the k-th rotation, P.sub.n.sup.k, and a one-hot vector, OH.sub.n, are used as input for the encoder 10 in order to estimate transformation embedding, T.sub.k. The transformation embedding along with the object embedding, O.sub.n, are concatenated with each point of the point cloud, X.sub.n.sup.k, and are used as input for the decoder in order to estimate the SDF values, S.sub.n.sup.k, and the 3D point correspondences”; Lee, Paragraph [0043]: “In some examples, the apparatus (e.g., third portion of the machine learning model) may update the first point cloud X with the computed rotation matrix and translation matrix to produce an updated point cloud X.sup.*”; Refer to Figure 1 to supplement the aforementioned passages. Notes: ResNet is shorthand for a residual neural network. Kayser teaches using ResNet to process a point cloud for accurate estimation of the orientation of the object, where ResNet is used to estimate the transformation embedding 30, which can be seen in Kayser, Figure 1. Kayser also teaches obtaining 3D point correspondences by concatenating the transformation embedding with each point in the point cloud. As taught by Lee as modified, 3D correspondence values can be used to compute rotation and translation matrices to update the point cloud to match a desired orientation.
A second display direction is taken in BRI to mean the orientation after first translation/rotation); Classifying the second display direction to obtain a second category to which the second display direction belongs; and determining the second preset direction corresponding to the second category by using a second preset correspondence (Kayser, Fig 1; Kayser, Paragraph [0038]: “In one embodiment, the encoder 10 extracts characteristics or features from the partial point cloud 2 and estimates the transformation in the embedding space 30. As already mentioned above, in one embodiment, SALs based on PointNet++ and a ResNet-based PointNet are used to process the point cloud (see FIG. 2). By using SALs, local characteristics can be extracted with three different radii and propagated to one or more further layers in order to obtain a global characteristic. As already described, the global characteristic can be attached to all points of the point cloud and entered into the ResNet-based PointNet in order to estimate the transformation embedding 30. The use of both local and global characteristics allows for a particularly accurate estimation of the orientation of the object”; Kayser, Paragraph [0041]: “In some embodiments of the disclosure, loss functions may be used as described below. As already described above, in one embodiment, the decoder 20 obtains a concatenated vector having a 3D point and embedding vector, in order to estimate SDF values and associated 3D coordinates in the canonical space. In this case, the loss functions of shape loss, L.sub.S, and correspondence loss, L.sub.C, can be formulated in order to estimate accurate SDF values and correspondences to the canonical orientation”; Kayser, Paragraph [0042]: “With respect to the encoder 10, the partial point cloud 2 of the n-th object, transformed using the k-th rotation, P.sub.n.sup.k, and a one-hot vector, OH.sub.n, are used as input for the encoder 10 in order to estimate transformation embedding, T.sub.k. The transformation embedding along with the object embedding, O.sub.n, are concatenated with each point of the point cloud, X.sub.n.sup.k, and are used as input for the decoder in order to estimate the SDF values, S.sub.n.sup.k, and the 3D point correspondences”; Lee, Paragraph [0041]: “The apparatus may globally align 108 the model point cloud and the scanned point cloud based on the correspondence scores. For example, the apparatus may globally align the model point cloud to the scanned point cloud or may globally align the scanned point cloud to the model point cloud based on the correspondence scores. In some examples, the apparatus may use a third portion of the machine learning model to globally align 108 the model point cloud and the scanned point cloud. For instance, the third portion of the machine learning model may infer and/or predict a rotation matrix and/or translation matrix to align the model point cloud and the scanned point cloud based on the correspondence scores”; Lee, Paragraph [0042]: “In some examples, the second portion of the machine learning model may provide the correspondence scores (e.g., probabilities) for each point in a first point cloud (e.g., model point cloud or scanned point cloud) to a second point cloud (e.g., scanned point cloud or model point cloud). For instance, the correspondence scores may indicate correspondence probabilities of point pairs between the model point cloud and the scanned point cloud. 
The apparatus (e.g., third portion of the machine learning model) may determine point cloud pairs (X, Y), where X = {x.sub.1, . . . , x.sub.n} are the points of the first point cloud and Y = {y.sub.1, . . . , y.sub.n} are the corresponding points of the second point cloud. From the point cloud pairs, the apparatus (e.g., third portion of the machine learning model) may compute a closed form solution of the rotation matrix R.sub.xy and translation matrix T.sub.xy from the first point cloud X to the second point cloud Y. For example, the apparatus may compute the rotation matrix and translation matrix using a singular value decomposition of a covariance matrix H = Σ.sub.i(x.sub.i − x̄)(y.sub.i − ȳ).sup.T = USV.sup.T, where U is an orthogonal matrix, S is a diagonal matrix, V is an orthogonal matrix, T denotes transpose, and x̄ and ȳ are centroids of X and Y, respectively. Examples of the rotation and translation matrices are given in Equation (6)”. Notes: The broadest reasonable interpretation of a “second category” is considered akin to the second display direction, or perhaps the general orientation after a first translation or rotation. In this case, each possible orientation can be considered its own category under the broadest reasonable interpretation, and hence, the second category is the same as the second display direction (the orientation after the first translation or rotation); furthermore, Lee teaches generating translation and rotation matrices to move from an initial orientation to a desired orientation state); Wherein the second preset correspondence is configured to represent the correspondence between the second category and the second preset direction (Lee, Paragraph [0041]: “The apparatus may globally align 108 the model point cloud and the scanned point cloud based on the correspondence scores. For example, the apparatus may globally align the model point cloud to the scanned point cloud or may globally align the scanned point cloud to the model point cloud based on the correspondence scores. In some examples, the apparatus may use a third portion of the machine learning model to globally align 108 the model point cloud and the scanned point cloud. For instance, the third portion of the machine learning model may infer and/or predict a rotation matrix and/or translation matrix to align the model point cloud and the scanned point cloud based on the correspondence scores”; Lee, Paragraph [0042]: “In some examples, the second portion of the machine learning model may provide the correspondence scores (e.g., probabilities) for each point in a first point cloud (e.g., model point cloud or scanned point cloud) to a second point cloud (e.g., scanned point cloud or model point cloud). For instance, the correspondence scores may indicate correspondence probabilities of point pairs between the model point cloud and the scanned point cloud. The apparatus (e.g., third portion of the machine learning model) may determine point cloud pairs (X, Y), where X = {x.sub.1, . . . , x.sub.n} are the points of the first point cloud and Y = {y.sub.1, . . . , y.sub.n} are the corresponding points of the second point cloud. From the point cloud pairs, the apparatus (e.g., third portion of the machine learning model) may compute a closed form solution of the rotation matrix R.sub.xy and translation matrix T.sub.xy from the first point cloud X to the second point cloud Y.
For example, the apparatus may compute the rotation matrix and translation matrix using a singular value decomposition of a covariance matrix H = Σ.sub.i(x.sub.i − x̄)(y.sub.i − ȳ).sup.T = USV.sup.T, where U is an orthogonal matrix, S is a diagonal matrix, V is an orthogonal matrix, T denotes transpose, and x̄ and ȳ are centroids of X and Y, respectively. Examples of the rotation and translation matrices are given in Equation (6)”. Notes: The broadest reasonable interpretation of a preset direction is the rotation/translation matrices, which are computed from the correspondence scores of the point clouds).
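Notes (illustration): the claimed category-to-preset-direction correspondence can be made concrete with a hypothetical numpy lookup, assuming 24 axis-aligned orientation categories (the proper rotations of a cube); neither Lee nor Kayser discloses such a table:

import itertools
import numpy as np

def cube_rotations():
    # The 24 proper rotations of a cube: signed permutation matrices with det +1.
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1.0, -1.0), repeat=3):
            M = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                M[row, col] = s
            if np.isclose(np.linalg.det(M), 1.0):
                mats.append(M)
    return mats

CATEGORIES = cube_rotations()                        # one category per orientation
PRESET = {i: C.T for i, C in enumerate(CATEGORIES)}  # category -> corrective rotation (inverse = transpose)

def classify(R):
    # Nearest category: trace(R @ C.T) grows as the relative rotation shrinks.
    return int(np.argmax([np.trace(R @ C.T) for C in CATEGORIES]))

R_display = CATEGORIES[7]          # an observed display direction
k = classify(R_display)            # its category (7)
R_preset = PRESET[k]               # preset direction for that category
assert np.allclose(R_preset @ R_display, np.eye(3))

Under this reading, classifying the display direction is a nearest-orientation lookup, and the preset correspondence is simply the table mapping each category to the rotation that restores the canonical orientation.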
Claim 15, which is similar in scope to Claim 4, is rejected under the same rationale. Claim 16, which is similar in scope to Claim 5, is rejected under the same rationale. Claim 17, which is similar in scope to Claim 6, is rejected under the same rationale. Claim 18, which is similar in scope to Claim 7, is rejected under the same rationale. Claim 19, which is similar in scope to Claim 8, is rejected under the same rationale. Claim 20, which is similar in scope to Claim 10, is rejected under the same rationale. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lee (WO 2021257094 A1) in view of Chernov (US 20220165388 A1) and Kayser (US 20230124868 A1), in further view of Bogacz (WO 2023086756 A1). Regarding Claim 10, the method of Claim 1 is rejected over Lee as modified. Lee as modified teaches moving the dental model to the preset position in the 3D space based on the point cloud data (Lee, Paragraph [0021]: “The apparatus may orient 102 a model point cloud or a scanned point cloud based on a set of initial orientations. An orientation is a position in a 3D space. For example, an orientation may express a rotation and/or translation of an object model and/or point cloud in 3D space. An initial orientation is a starting orientation of an object model and/or point cloud. For example, a set of initial orientations may include initial orientations for the model point cloud and/or scanned point cloud from which feature determination, correspondence score determination, and/or alignment procedures may be performed. In some examples, orienting 102 a model point cloud or a scanned point cloud based on a set of initial orientations may include orienting (e.g., computing an orientation of) the model point cloud and/or scanned point cloud to an orientation (e.g., rotation and/or translation) indicated by an initial orientation or initial orientations in the set of initial orientations”). Lee as modified does not teach determining a middle position of the point cloud data, determining a coordinate origin of the dental model in the 3D space based on the middle position, and moving the dental model to the preset position based on the coordinate origin. However, moving a model to a preset position based on the coordinate origin is considered obvious in the art, and is inherent to 3D modeling and 3D modeling software. Furthermore, Bogacz teaches determining a middle position of the point cloud data (Paragraph [0038]: “Each level of integrity verification may be associated with a different data selection algorithm, methodology, or technique for ensuring the integrity of a different amount or sampling of data from the file. As shown in FIG. 2, CGVS 100 may determine (at 204) a particular level of integrity verification for the received point cloud file that is associated with selecting (at 206) data points within various planes defined from a center point of the point cloud. Specifically, CGVS 100 may identify the center point within the 3D space represented by the point cloud”); Determining a coordinate origin of the model in the 3D space based on the middle position (Paragraph [0024]: “Each point cloud data point may include positional and non-positional values. The positional values may include coordinates within 3D space. For instance, each point cloud data point may include x-coordinate, y-coordinate, and z-coordinate data point values for each imaged point, feature, element, object of the 3D environment”; Notes: while Bogacz does not explicitly teach a dental model, the teaching of Bogacz can be applied to any model and its associated point cloud data. In its broadest reasonable interpretation, a coordinate origin (x, y, z = 0) is inherent to a 3D space. Given that the preset position is specified (which can be a desired orientation, which itself can have a middle position with xyz coordinates) and the original orientation has a middle position with xyz coordinates, moving the dental model to the preset position given both coordinates is inherent to modeling in 3D spaces). Lee as modified and Bogacz are considered analogous art with respect to working with 3D models and associated point cloud data. Visually centering models for viewing is commonplace in the art; a motivation for using a middle position of a model's point cloud data to determine a coordinate origin of the dental model would be to more consistently center the 3D model for viewing. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of moving the dental model in a 3D space based on point cloud data of Lee as modified with Bogacz's method of determining a middle point of point cloud data and an associated coordinate origin for a 3D model; doing so would yield the predictable result of a method for consistently moving a dental model in a 3D space through a central point in its point cloud data, and establishing an axis in relation to the central point.
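Notes (illustration): the middle-position/coordinate-origin step the rejection treats as inherent is a one-liner in practice. A minimal numpy sketch, taking the middle position to be the centroid (the bounding-box center would be an equally plausible reading):

import numpy as np

def center_at_origin(points):
    # Translate the cloud so its middle position becomes the coordinate origin.
    middle = points.mean(axis=0)   # centroid as the middle position
    # Alternative reading: bounding-box middle, (points.min(0) + points.max(0)) / 2
    return points - middle, middle

cloud = np.random.default_rng(1).normal(loc=5.0, size=(2048, 3))
centered, middle = center_at_origin(cloud)
assert np.allclose(centered.mean(axis=0), 0.0, atol=1e-9)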
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYMOND CHUN LAM LI whose telephone number is (571)272-5124. The examiner can normally be reached M-F 8:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RAYMOND CHUN LAM LI/Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Jul 04, 2024 — Application Filed
Mar 04, 2026 — Non-Final Rejection — §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
