DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Prior art cited in this Office action:
Kearney et al. (US 20210353393 A1, hereinafter “Kearney”)
Farkash et al. (US 20220189611 A1, hereinafter “Farkash”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kearney et al. (US 20210353393 A1, hereinafter “Kearney”) in view of Farkash et al. (US 20220189611 A1, hereinafter “Farkash”).
Regarding claims 1, 17 and 19:
Kearney teaches a support apparatus that supports diagnosis of a state of disease in a
biological tissue in an oral cavity including at least a tooth and a gingiva (Kearney [0016]-[0017], where Kearney teaches an automated system/apparatus and method for analyzing dental images to determine disease in oral tissues, etc.), the support apparatus comprising:
input processing circuitry configured to receive input of image data showing a
three-dimensional geometry of the biological tissue (Kearney [0126], [0139], [0184]-[0187], where Kearney teaches that the method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. Accordingly, the image may be of a patient's mouth obtained by means of an X-ray (intra-oral or extra-oral, full mouth series (FMX), panoramic, cephalometric), computed tomography (CT) scan, cone-beam computed tomography (CBCT) scan, intra-oral image capture using an optical camera, magnetic resonance imaging (MRI), or other imaging modality); and
computing processing circuitry configured to derive support information including information on positions of at least the tooth and the gingiva by using the image data inputted from the input processing circuitry (Kearney [0224], [0239], [0353], where Kearney teaches that machine learning models may be trained to identify and measure dental anatomy that may be used to determine the appropriateness of root canal therapy at a given tooth position, such as crown-to-root ratio, calculus, root length, relative distance to adjacent teeth, furcation, fracture, and whether the tooth at that tooth position is missing) and an estimation model for support of diagnosis of the state of disease in the biological tissue based on the image data, the estimation model being trained by machine learning to derive the support information (Kearney [0224], [0239], [0380], where Kearney teaches that the illustrated system 2600 may be used to estimate the surface of a tooth in which caries are present).
Kearney fails to explicitly teach that the information includes information on positions of at least the tooth and the gingiva relative to each other.
However, Kearney teaches that, for example, the system 800 may be used to label anatomical features such as the cementum enamel junction (CEJ), bony points on the maxilla or mandible that are relevant to the diagnosis of periodontal disease, gingival margin, junctional epithelium, or other anatomical feature. In order to establish the correct diagnosis from dental images, it is often useful to identify the gingival margin. This soft tissue point can be difficult to identify in dental X-ray, CBCT, and intra-oral images because the soft tissue point is not always clearly differentiated from other parts of the image and might be obfuscated by overlapping anatomy from adjacent teeth or improper patient setup and image acquisition geometry. To solve this problem, the system 800 may be used to identify the gingival margin as the anatomical feature of interest.
Machine learning models may also be trained to identify and measure dental anatomy that may be used to determine the appropriateness of root canal therapy at a given tooth position, such as crown-to-root ratio, calculus, root length, relative distance to adjacent teeth, furcation, fracture, and whether the tooth at that tooth position is missing (Kearney [0239], [0353]). Furthermore, Farkash, in the same field of endeavor, teaches that the methods and apparatuses described therein may relate to oral scanners and methods of their use, and particularly to generating three-dimensional (3D) representations of the teeth and gingiva and other soft tissues of the mouth. In particular, Farkash describes methods and apparatuses that may be useful in scanning, including 3D scanning, and analyzing the intraoral cavity for detection, diagnosis, treatment, and longitudinal tracking of oral conditions. These methods and apparatuses may generate a three-dimensional (3D) model of a subject's gums and teeth that includes both surface topography and internal structures of the teeth (e.g., roots, dentin, dental fillings, cracks and/or caries) and the periodontium (e.g., gingiva, periodontal ligament, cementum and/or alveolar bone) (Farkash [0003], [0006]-[0008], [0010], [0057], figs. 1A, 3, 11 and 16).
Therefore, taking the teachings of Kearney and Farkash as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take a 3D image/scan of an organ, such as the inside of the mouth, and to use the 3D image and machine learning to determine not only disease and disease progression in and/or around the organ but also the positional relationship among all the tissues in the mouth (the teeth, the roots, the gingiva, etc.) in order to provide appropriate treatment to prevent, alleviate, or cure disease and/or to limit disease progression.
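For illustration only (neither Kearney nor Farkash discloses source code; the label values, array shapes, and function below are hypothetical), the combined teaching of deriving relative tooth/gingiva positions from a labeled 3D scan could be sketched as:

```python
# Illustrative sketch only -- not code from Kearney or Farkash.
# Assumes a hypothetical voxel segmentation (0=background, 1=tooth,
# 2=gingiva) already produced by a trained estimation model.
import numpy as np

TOOTH, GINGIVA = 1, 2

def relative_positions(labels: np.ndarray) -> np.ndarray:
    """Return the centroid offset (z, y, x) of the gingiva relative
    to the tooth in a labeled 3D volume."""
    tooth_centroid = np.argwhere(labels == TOOTH).mean(axis=0)
    gingiva_centroid = np.argwhere(labels == GINGIVA).mean(axis=0)
    return gingiva_centroid - tooth_centroid

# Toy 3D volume: tooth occupies the upper half, gingiva the lower half.
volume = np.zeros((8, 8, 8), dtype=np.int8)
volume[:4] = TOOTH
volume[4:] = GINGIVA
print(relative_positions(volume))  # offset along the z (tooth-axis) direction
```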
Regarding claims 2, 18 and 20:
Kearney in view of Farkash teaches wherein the support information includes position information corresponding to each of a plurality of levels each indicating a position of the biological tissue present along a prescribed direction of measurement at a prescribed measurement point in the biological tissue (Kearney [0145], [0260], [0348]-[0350]; Farkash [0018], [0079]; note: see, e.g., US 20120171634 [0109]).
Regarding claim 3:
Kearney in view of Farkash teaches wherein the support information includes a distance between a plurality of levels each indicating a position of the biological tissue present along a prescribed direction of measurement at a prescribed measurement point in the biological tissue (Kearney [0145], [0260], [0348]-[0350], [0353], [0392]; Farkash [0018], [0079]).
Regarding claim 4:
Kearney in view of Farkash teaches wherein the computing processing circuitry is further configured to derive, as the support information, position information corresponding to each of a plurality of levels each indicating a position of the biological tissue present along a prescribed direction of measurement at a prescribed measurement point in the biological tissue, calculate a distance between the plurality of levels based on the position information corresponding to each of the plurality of levels, and calculate at least one of information indicating a type of the state of disease and information indicating a degree of progress of the state of disease based on the distance between the plurality of levels (Kearney [0145], [0260], [0348]-[0350], [0353], [0392]; Farkash [0013]-[0018]).
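Purely as an illustrative sketch (the millimeter thresholds below are hypothetical and are not taken from Kearney, Farkash, or any cited standard), the claimed calculation of a distance between two levels and a degree of disease progression from that distance could be expressed as:

```python
# Illustrative sketch only -- thresholds are hypothetical, not from
# the references or any clinical standard cited in this action.
def progression_from_levels(cej_mm: float, bone_crest_mm: float) -> str:
    """Classify disease progression from the distance between two
    levels (CEJ and alveolar bone crest) along the tooth axis."""
    loss = bone_crest_mm - cej_mm  # attachment loss in millimeters
    if loss < 2.0:
        return "healthy"
    if loss < 4.0:
        return "mild"
    if loss < 6.0:
        return "moderate"
    return "severe"

print(progression_from_levels(cej_mm=1.0, bone_crest_mm=5.5))  # "moderate"
```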
Regarding claim 5:
Kearney in view of Farkash teaches wherein the support information includes at least one of information indicating a type of the state of disease and information indicating a degree of progress of the state of disease (Kearney [0356], [0593]; Farkash [0013]-[0018]).
Regarding claim 6:
Kearney in view of Farkash teaches wherein the estimation model is trained by the machine learning with training data including the image data and the support information associated with the image data (Kearney [0140], [0170], [0209], [0326]-[0327], [0594]-[0596]; Farkash [0013]-[0018]).
Regarding claim 7:
Kearney in view of Farkash teaches wherein the prescribed measurement point is a point located around the tooth when a coronal portion included in the tooth is viewed from top (Kearney [0348], [0367], [0388]; Farkash [0004], [0018]).
Regarding claim 8:
Kearney in view of Farkash teaches wherein the prescribed direction of measurement is a direction along a tooth axis of the tooth (Kearney [0188], [0348], [0367], [0388], [0802]; Farkash [0004], [0018]).
Regarding claim 9:
Kearney in view of Farkash teaches wherein the plurality of levels indicate any position in the biological tissue of a top of a coronal portion, a margin of the gingiva, a top of an alveolar bone, a junction between the tooth and the gingiva, a root apex portion, a furcation, a junction between cementum and enamel of the tooth, and a deepest portion of a defect portion of the alveolar bone that is lost in the furcation (Kearney [0326]-[0345]; Farkash [0018], [0079]).
Regarding claim 10:
Kearney in view of Farkash teaches wherein the computing processing circuitry is configured to derive, with the estimation model, the support information for a site along the prescribed direction of measurement at the prescribed measurement point in the biological tissue based on the prescribed measurement point and the prescribed direction of measurement in addition to the image data inputted from the input processing circuitry (Kearney [0188], [0326]-[0345], [0348], [0367], [0388], [0802]).
Regarding claim 11:
Kearney in view of Farkash teaches wherein the computing processing circuitry is configured to derive, with the estimation model, information indicating a degree of progress of the state of disease as the support information based on at least one of a sex, an age, and information on a bone density of a patient having the biological tissue in addition to the image data inputted from the input processing circuitry (Kearney [0140], [0324], [0427], [0436], [0692], claim 1; Farkash [0014]).
Regarding claim 12:
Kearney in view of Farkash teaches wherein the image data is generated based on optical scanner data obtained by scanning by an optical scanner, the optical scanner data including position information of each point in a point group indicating a surface of the biological tissue, and computed tomography (CT) data obtained by CT scanning of the biological tissue (Kearney [0139], [0187]-[0188]; Farkash [0008]-[0010], [0012], [0015], [0082]).
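For illustration only (a hypothetical toy example; registration between the two modalities is omitted and all values are stand-ins), combining optical-scanner point-cloud data with CT volume data into a single image-data set could be sketched as:

```python
# Illustrative sketch only -- a toy fusion of optical-scanner surface
# points (x, y, z per point) with a CT volume, assuming both are
# already in the same coordinate frame (registration is omitted).
import numpy as np

ct_volume = np.random.rand(64, 64, 64)          # stand-in CT densities
surface_points = np.array([[10.2, 20.7, 30.1],  # stand-in scanner points
                           [11.5, 21.3, 29.8]])

# Attach the nearest CT voxel value to each surface point, yielding
# combined image data (surface geometry + internal density).
idx = np.clip(np.rint(surface_points).astype(int), 0, 63)
densities = ct_volume[idx[:, 0], idx[:, 1], idx[:, 2]]
fused = np.column_stack([surface_points, densities])
print(fused.shape)  # (2, 4): x, y, z, density
```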
Regarding claim 13:
Kearney in view of Farkash teaches wherein the optical scanner data includes information indicating a color of the surface of the biological tissue (Kearney [0230], where the color of the teeth and/or the gum can indicate whether they are healthy or not; Farkash [0008]-[0010]).
Regarding claim 14:
Kearney in view of Farkash teaches wherein the computing processing circuitry is configured to cause a display to show the support information as being superimposed on a designated position in the biological tissue (Kearney [0286], [0494], figs. 36, 37C).
Regarding claim 15:
Kearney in view of Farkash teaches wherein the computing processing circuitry is configured to cause the display to show the support information in a color in accordance with the support information (Kearney [0286], [0436]-[0437], [0494], [0702], figs. 36, 37C; Farkash [0008]-[0010], [0065]).
Regarding claim 16:
Kearney in view of Farkash teaches wherein the computing processing circuitry is configured to cause the display to show, based on the support information, simulation information that shows prediction of future change of the support information (Kearney [0286], [0427], [0436]-[0437], [0494], [0702], figs. 36, 37C, claim 1; Farkash [0065]).
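As a purely hypothetical sketch (the references do not disclose this particular prediction model), simulation information predicting a future change of the support information could be as simple as a linear extrapolation of past measurements:

```python
# Illustrative sketch only -- a naive linear extrapolation of a
# measured level distance to predict future change; the references
# do not disclose this particular model.
import numpy as np

months = np.array([0.0, 6.0, 12.0])   # past visit times (stand-ins)
loss_mm = np.array([2.0, 2.4, 2.9])   # measured attachment loss (stand-ins)
slope, intercept = np.polyfit(months, loss_mm, deg=1)
print(f"projected loss at 24 months: {slope * 24 + intercept:.2f} mm")
```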
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
March 1, 2026