DETAILED ACTION
This communication is a Non-Final Rejection Office Action in response to the 12/12/2023 filing of Application 18/537,622. Claims 1-13 are pending.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Marshall (US 2023/0035538 A1) in view of Sachdeva (US 2020/0066391 A1).
As per Claim 1, Marshall teaches a method of generating manufacturing parameters during a dental procedure for a dental prosthesis, the method comprising the steps of:
a) intra-orally scanning a patient to generate intra-oral data; Marshall paras. 67-68 teach the example dental impression station 106 can be configured to generate a dental impression 108 of dentition of patient 150. The dental impression 108 can be a geometric representation of the dentition of the patient 150, which may include teeth (if any) and edentulous (gum) tissue. The dental impression 108 can also be a physical impression captured using an impression material, such as sodium alginate, polyvinylsiloxane or another impression material. The dental impression 108 may also be a digital impression. The digital impression 108 may be represented by one or more of a point cloud, a polygonal mesh, a parametric model, or voxel data. The digital impression 108 can be generated directly from the dentition of the patient 150, using, for example, an intraoral scanner or other image capture system 107.
b) extra-orally scanning the patient during a mandibular motion to generate dynamic extra-oral maxillofacial data; Marshall paras. 8-9 teach that this document generally describes technology for generating a digital tooth setup, which can be used to produce any of a variety of physical dental appliances, such as dentures, orthodontia, liners, dental implants (e.g., crowns, bridges), and/or other dental appliances. More particularly, the disclosed technology can be used to automatically determine an appropriate setup (e.g., design, fabrication, installation) of dentures or other dental appliances based on movement of a particular patient's mouth and other information about the patient's mouth that is unique to that particular patient. The disclosed technology can combine accurate motion data with three-dimensional (3D) intraoral scans (IOS), 3D cone beam CTs (CBCT), or other types of imaging data for improved treatment planning, digital appliance design, and case presentation. The disclosed technology provides a digital workflow presentable through various user-interactive user interfaces with precision data for designing crowns, bridges, dentures, splints, and other types of digital appliances for the different needs of different patients. Paras. 70-72 teach that the motion capture system 200 can generate the motion data 110 from optical measurements of dental arches that are captured while the dentition of the patient 150 is moving. The optical measurements can be extracted from image and/or video data recorded (such as by the image capture system 107) while the dentition of the patient 150 is moving. The optical measurements can also be captured indirectly, such as by being extracted from images and/or video data generated by one or more other devices (e.g., a patient assembly such as patient assembly 204 described in FIGS. 2 and 3) that are secured to a portion of the dentition of the patient 150.
The motion data 110 can be generated using other processes in some implementations. Further, the motion data 110 can include transformation matrices that represent position and orientation of the dental arches of the patient 150. Transformation matrices can describe relative position and orientation of the lower arch to the upper arch of the patient's teeth for each video frame. This information can be used to accurately animate the lower arch so that tooth relationships between the arches, throughout the motion, can be assessed and used to reposition any one or more of the teeth. These animated tooth positions can identify potential tooth interferences, which therefore can inform appropriate appliance design. Teeth can be repositioned, reshaped, and/or removed to avoid such interferences. For some appliances, multiple tooth contact points may be desirable as this can provide better support and potentially prevent fracturing at a single point of contact. The motion data 110 may also include a series of transformation matrices that represent various motions or functional paths of movement for the patient 150's dentition.
Moreover, still images and/or video data can be captured, by the image capture system 107, of the patient 150's dentition while the dentition is positioned in various bite locations. Image processing techniques can be used to determine positions of the patient 150's upper and lower arches relative to each other (either directly or based on the positions of the attached patient assembly 204). In some implementations, the motion data 110 can be generated, by the motion capture system 200, by interpolating between the positions of the upper and lower arches as determined from at least some of the captured images. Additional motion frames can be interpolated between the images that have been captured.
The motion data 110 may be captured while the patient 150's jaw is in various static positions and/or moving through various motions. For example, the motion data 110 may include a static measurement representing a centric occlusion (e.g., the patient's mandible closed with teeth fully engaged) and/or centric relation (e.g., the patient's mandible nearly closed, just before any shift occurs that is induced by tooth engagement or contact) bite. The motion data 110 may also include static measurements or sequences of data corresponding to protrusive (e.g., the patient's mandible being shifted forward while closed), lateral excursive (e.g., the patient's mandible shifted/rotated left and right while closed), hinging (e.g., the patient's mandible opening and closing without lateral movement), chewing (e.g., the patient's mandible chewing naturally to, for example, determine the most commonly used tooth contact points), and/or border movements (e.g., the patient's mandible is shifted in all directions while closed, for example, to determine the full range of motion). In some implementations, the motion data 110 can be captured while the patient 150 is using a Lucia jig or leaf gauge so that the patient 150's teeth (for patients who are not completely edentulous, for example) may not impact/contribute to the generated movement data. This motion data 110 may be used to determine properties of the patient 150's temporomandibular joint (TMJ). For example, hinging motion identified in the motion data 110 may be used, by the denture design system 116 described herein, to determine location of the hinge axis of the patient 150's TMJ. Knowing the location of the hinge axis for the particular patient 150 can be beneficial to design accurate and appropriately-fitting dentures for the patient 150.
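For illustration only (this sketch is not part of the Marshall reference or the claimed invention), the per-frame transformation matrices described above, which relate the lower arch to the upper arch for each video frame, could be applied to a lower-arch point cloud as follows; the hinge rotation, translation, and point coordinates are hypothetical:

```python
import numpy as np

def make_frame_transform(angle_rad, translation):
    """Build a 4x4 homogeneous transform: a rotation about the x-axis
    (standing in for a hinge axis) followed by a translation."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    T = np.eye(4)
    T[1, 1], T[1, 2] = c, -s
    T[2, 1], T[2, 2] = s, c
    T[:3, 3] = translation
    return T

def animate_lower_arch(points, frame_transforms):
    """Apply each frame's transform to a lower-arch point cloud,
    yielding one repositioned copy of the points per video frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # N x 4
    return [(T @ homo.T).T[:, :3] for T in frame_transforms]

# Hypothetical data: two arch points, a closed-bite frame (identity)
# and a frame with a 10-degree hinge opening plus a small translation.
lower_arch = np.array([[0.0, 10.0, 0.0], [5.0, 10.0, 0.0]])
frames = [make_frame_transform(0.0, [0.0, 0.0, 0.0]),
          make_frame_transform(np.deg2rad(10.0), [0.0, 0.0, -1.0])]
animated = animate_lower_arch(lower_arch, frames)
```

The repositioned points for each frame could then be checked for tooth interferences, consistent with Marshall's description of using animated tooth positions to inform appliance design.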
c) resolving the intra-oral data with the dynamic extra-oral maxillofacial data to generate a combined data model; Marshall paras. 8-9 teach the disclosed technology can combine accurate motion data with three-dimensional (3D) intraoral scans (IOS), 3D cone beam CTs (CBCT), or other types of imaging data for improved treatment planning, digital appliance design, and case presentation. The disclosed technology provides a digital workflow presentable through various user-interactive user interfaces with precision data for designing crowns, bridges, dentures, splints, and other types of digital appliances for the different needs of different patients.
Motion and/or image data can be captured of a patient's mouth and used in combination with a 3D model of the patient's mouth/teeth to design and set up dentures or other dental appliances for the patient. For example, motion data can be applied to a mandibular arch of the patient's teeth in the 3D model, since the mandibular arch affects tooth positioning. Applying the motion data to the arch can set the arch into motion, and a relevant user, such as a dentist, caregiver, or other operator, can use graphical user interface (GUI) features presented at a computing device to select which teeth to be moved. The motion data can be applied, by a computer system, to the selected teeth, thereby causing the computer system to automatically displace the selected moveable teeth to avoid interference with other teeth throughout an animation sequence. This animation sequence can help inform the user of an appropriate design for a dental appliance based on actual movement of the patient's jaw.
d) generating a model representation of the dental prosthesis based on the combined data model and displaying the model representation to a user; Marshall para. 21 teaches in some implementations, repositioning, by the computing system, the digital dental model can include: automatically adjusting a position of a condyle hinge axis for each frame in the motion data, rotating at least a portion of lower teeth represented by the digital dental model to a first contact point based on the adjusted condyle hinge axis, and returning a tooth setup indicating an arrangement of the portion of lower teeth based on the first contact point. Returning, by the computing system, the repositioned digital dental model can include generating a digital representation of a dental appliance based on the repositioned digital dental model. The digital representation of the dental appliance can include instructions for manufacturing the dental appliance. The method can also include transmitting, by the computing system, at least one of the digital representation of the dental appliance and the instructions for manufacturing the dental appliance to a rapid fabrication machine that can be configured to fabricate the dental appliance.
e) generating manufacturing parameters based on the combined data model; and Marshall para. 21 teaches the digital representation of the dental appliance can include instructions for manufacturing the dental appliance. The method can also include transmitting, by the computing system, at least one of the digital representation of the dental appliance and the instructions for manufacturing the dental appliance to a rapid fabrication machine that can be configured to fabricate the dental appliance.
Marshall does not teach a step, prior to step c), of identifying the patient using facial recognition based on the dynamic extra-oral maxillofacial data and/or the static extra-oral maxillofacial data. However, Sachdeva para. 183 teaches that the method 1400a illustrates, at step 1401a, capturing patient data, such as by using an input capturing device. The input capturing device may include a scanner, an image capturing device like a camera, an input capture mechanism available on a mobile device, and other similar technologies available for capturing multiple 2D images of the user. In an example, the image capturing technology may include capturing the patient's facial morphology using computer vision technology, such as using free open source tools for facial recognition and facial orientation detection. In some example embodiments, computer vision may be used in combination with the image capturing technology to ensure that the user takes pictures of all appropriate orientations (facing front, right-side, left-side) required by a system, such as the orthodontic care management platform 102a. Some open source computer vision technologies, such as OpenCV, also include a statistical machine learning library that may be used to learn the user's facial features. Once learned, the system can automatically detect the user the next time around. Thus, the captured user data may also be used as part of a biometric authentication system for access to the patient's dental/medical records, which may be stored in a database. Once patient data is successfully captured, the method 1400a may proceed, at step 1402, to analyze patient morphology.
Marshall also does not teach a further step, after step d), of exporting the intra-oral data and associating it with a dental record of the patient. However, Sachdeva para. 172 teaches that FIG. 9 illustrates another exemplary method 900 for providing an orthodontic care management solution. The method 900 includes, at step 901, collecting user data. The user data may be data about user preferences, the user's facial anatomy, smile anatomy, picture, scan, and the like as discussed previously. Once the user data is collected, the method 900 may include, at step 902, identifying related user data using artificial intelligence techniques. The related user data may include data about other users and/or patients who have undergone a similar treatment, patients with similar facial anatomy or smile anatomy, the success and failure of treatment involving similar patients, patients treated by the same orthodontist, and the like. In some example embodiments, the data identification may involve performing pattern recognition, pattern matching, natural language processing (NLP), generating a learning model for automatic patient data matching, providing auto-suggestions, and the like. The related user data and the user data identified in this manner may be used, at step 903, for creating a structured database. The organization of all the data in the form of a structured database provides the unique advantage of ease of access and faster retrieval of data. This data may be used, at step 904, for modification of the data. Data modification may be done, such as to provide additional details about the user, pattern matching, and identifying similar data. The modified data may further, at step 905, be encrypted for data confidentiality and security purposes. The encrypted data may be used, at step 906, for performing diagnostic analysis on user data. The diagnostic analysis may include, at step 907, identifying if the user wants treatment simulation.
If yes, then at step 907a1, some user specific constraints may be identified. Further, at step 907a2, it may be identified based on patient data analysis whether the patient needs professional intervention. If yes, then at step 907a3, a desired professional may be identified based on a plurality of factors. Alternately, the method 900 may proceed to step 907b1 if it is identified at step 907 that the patient does not need treatment simulation. In this case, at step 907b1, pattern recognition may be performed to identify similar user profiles. Further, at step 907b2, these similar profiles may be displayed to the user, such as on the display interface of the user device. Both Marshall and Sachdeva are directed to dental scanning. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant's invention to modify the teachings of Marshall to include a step prior to step c), of identifying the patient using facial recognition based on the dynamic extra-oral maxillofacial data and/or the static extra-oral maxillofacial data, and a further step after step d), of exporting the intra-oral data and associating it with a dental record of the patient, as taught by Sachdeva, to deliver care to patients in a more efficient and effective manner (see para. 107).
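As an illustrative sketch only (not drawn from either reference), the biometric patient-record association that Sachdeva describes, in which a learned facial feature vector is matched against stored records, could be reduced to a cosine-similarity lookup as below; the embeddings, patient ID, and threshold are all hypothetical:

```python
import numpy as np

def enroll(records, patient_id, embedding):
    """Associate a facial feature vector with a patient's dental
    record (an in-memory dict standing in for the records database)."""
    records[patient_id] = np.asarray(embedding, dtype=float)

def identify(records, embedding, threshold=0.9):
    """Return the best-matching patient ID by cosine similarity,
    or None if no stored vector clears the threshold."""
    query = np.asarray(embedding, dtype=float)
    best_id, best_sim = None, threshold
    for pid, stored in records.items():
        sim = stored @ query / (np.linalg.norm(stored) * np.linalg.norm(query))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id

# Hypothetical enrollment and later re-identification of a patient.
records = {}
enroll(records, "patient-150", [0.2, 0.9, 0.4])
match = identify(records, [0.21, 0.88, 0.41])  # near-identical features
```

In practice the feature vectors would come from a face-recognition library (Sachdeva mentions OpenCV's machine learning tools), and a successful match would gate access to the stored dental/medical record.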
As per Claim 2, Marshall teaches the method of claim 1, wherein during step c), more than one said combined data model is generated;
wherein during step d), more than one said model representation of the dental prosthesis is generated based on each of the more than one combined data models respectively; further comprising a step after step d) and before step e), of selecting one representation of the dental prosthesis; and wherein during step e), manufacturing parameters are generated based on the combined data model of the selected one representation. Marshall para. 180 teaches in some implementations, any of the user interfaces described in reference to FIGS. 10-18 can allow the user to iterate through the techniques described herein for positioning digital denture teeth repeatedly and in any order/sequence. Therefore, the user can iterate through variations of digital denture teeth designs to efficiently determine a preferred design for the particular patient. Moreover, the digital denture teeth in the user interfaces of FIGS. 10-18 can behave as actual teeth would in the physical world, thereby enabling patient-specific standardization in denture design with time-saving functionality and improved efficiency and accuracy. Real-time collision detection techniques, as implemented by the computer system, can improve quality in denture design.
As per Claim 3, Marshall teaches the method of claim 1, further comprising a step after step d) and before step e), of amending the combined data model based on amendments made to the model representation or the selected one representation of the dental prosthesis displayed to the user. Marshall para. 171 teaches that once the digital denture teeth are in their initial positions, their positions may be further refined. For example, an additional user interface can be presented to the user at their computing device, which can be configured to receive user input to adjust the positions of one or more of the digital denture teeth relative to each other (or independently of each other).
As per Claim 6, Marshall teaches the method of claim 1, wherein during step a), the generation of intra-oral data includes intra-orally scanning a full arch of the patient's mandibular teeth and/or maxillary teeth. Marshall para. 107 teaches that the position indicator system 228 can be magnetically attached to an upper arch projector via a repeatable kinematic mount. This coupling can make it possible to easily remove the position indicator system 228 once an upper projector registration is completed. The tip of the position indicator system 228 can be sufficiently close to the upper arch dentition such that both it and a partial scan of the upper arch dentition can be captured together in an intraoral (IOS) scan. The teeth in this partial IOS scan can then be used to best fit the partial IOS scan to a complete IOS scan of the upper arch dentition. The position indicator system 228 can also be calibrated to the upper arch projector during manufacturing such that a location and orientation of laser beams emitted by the projector can be known relative to the position indicator system 228. The emitted laser beams can define a coordinate system for the projector. Since the projector coordinate system can be known relative to the position indicator system 228 via calibration and the location and orientation of the position indicator system 228 are known relative to the patient's dentition via the IOS scans, it may be possible to accurately relate the upper projector coordinate system to the patient's upper dentition. A complete IOS scan of the lower arch may also be completed and properly related to the full upper arch scan using features provided by the IOS scanner. The bite relationship established by the IOS scanner software can typically be a centric occlusion (CO) bite, but may not be limited to this type of bite. A clinician can also capture the same bite relationship using a motion capture system described throughout this disclosure.
Since the relationship of the upper and lower IOS scans may be known together with upper and lower projector coordinate systems for the same bite relationship, it can also be possible to animate the IOS scans using recorded motion frames. Other types of 3D scans, such as those from a cone beam CT scanner or 3D camera, can also be related to the motion data, typically by best fitting features, such as teeth, that are common to both the IOS scan and the other 3D scans.
As per Claim 7, Marshall teaches the method of claim 1, wherein step b) further includes extra-orally scanning the patient while a mandible of the patient is in a closed position to generate static extra-oral maxillofacial data; and Marshall para. 72 teaches that the motion data 110 may be captured while the patient 150's jaw is in various static positions and/or moving through various motions. For example, the motion data 110 may include a static measurement representing a centric occlusion (e.g., the patient's mandible closed with teeth fully engaged) and/or centric relation (e.g., the patient's mandible nearly closed, just before any shift occurs that is induced by tooth engagement or contact) bite. The motion data 110 may also include static measurements or sequences of data corresponding to protrusive (e.g., the patient's mandible being shifted forward while closed), lateral excursive (e.g., the patient's mandible shifted/rotated left and right while closed), hinging (e.g., the patient's mandible opening and closing without lateral movement), chewing (e.g., the patient's mandible chewing naturally to, for example, determine the most commonly used tooth contact points), and/or border movements (e.g., the patient's mandible is shifted in all directions while closed, for example, to determine the full range of motion). In some implementations, the motion data 110 can be captured while the patient 150 is using a Lucia jig or leaf gauge so that the patient 150's teeth (for patients who are not completely edentulous, for example) may not impact/contribute to the generated movement data. This motion data 110 may be used to determine properties of the patient 150's temporomandibular joint (TMJ). For example, hinging motion identified in the motion data 110 may be used, by the denture design system 116 described herein, to determine location of the hinge axis of the patient 150's TMJ.
Knowing the location of the hinge axis for the particular patient 150 can be beneficial to design accurate and appropriately-fitting dentures for the patient 150.
during step c), the combined data model is based on resolving the intra-oral data with both the dynamic and static extra-oral maxillofacial data. Marshall para. 8 teaches that this document generally describes technology for generating a digital tooth setup, which can be used to produce any of a variety of physical dental appliances, such as dentures, orthodontia, liners, dental implants (e.g., crowns, bridges), and/or other dental appliances. More particularly, the disclosed technology can be used to automatically determine an appropriate setup (e.g., design, fabrication, installation) of dentures or other dental appliances based on movement of a particular patient's mouth and other information about the patient's mouth that is unique to that particular patient. The disclosed technology can combine accurate motion data with three-dimensional (3D) intraoral scans (IOS), 3D cone beam CTs (CBCT), or other types of imaging data for improved treatment planning, digital appliance design, and case presentation. The disclosed technology provides a digital workflow presentable through various user-interactive user interfaces with precision data for designing crowns, bridges, dentures, splints, and other types of digital appliances for the different needs of different patients.
As per Claim 8, Marshall does not teach further comprising a step before step e), of prototyping the model representation or the one selected model representation of the dental prosthesis. However, Sachdeva para. 134 teaches that the orthodontic care management platform 102a may be in communication with the orthodontic appliance management system for designing the orthodontic appliance 103. In some example embodiments, the orthodontic appliance management system may be configured to generate a prototype of the orthodontic appliance 103, such as using a 3D printing workflow. In some other embodiments, the orthodontic appliance management system 102b may be configured for directly generating the orthodontic appliance 103, such as by using in-clinic 3D printers. The orthodontic appliance management system 102b may aid in designing appropriate appliances for each stage and sequence of treatment. In some example embodiments, the orthodontic appliance management system may include software interfaces specifically implemented for designing of the orthodontic appliance 103. The design files associated with the design of the orthodontic appliance 103 may either be stored locally, such as on the orthodontic appliance management system 102b, or may be sent to a remote system for manufacturing. The manufacturing of the orthodontic appliance 103 may be done using any of the technologies known in the art, such as through subtractive machining, additive manufacturing, die casting, assembling, and the like. Both Marshall and Sachdeva are directed to dental scanning.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant's invention to modify the teachings of Marshall to include a step before step e), of prototyping the model representation or the one selected model representation of the dental prosthesis, as taught by Sachdeva, to use the prototype to verify fit and function of an appliance before it is manufactured.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Marshall (US 2023/0035538 A1) in view of Sachdeva (US 2020/0066391 A1) as applied to Claim 1 above, and further in view of Stegall (US 2013/0260340 A1).
As per Claim 4, Marshall teaches the method of claim 1, but does not teach wherein during step b), the mandibular motion is facilitated by a contrast-enhancing medium. However, Stegall's Abstract teaches a method for intraoral image scanning using a powder with enhanced feature contrast. The method includes applying the powder to an intraoral structure and using an intraoral scanner in order to obtain electronic digital scan images of the intraoral structure. The powder includes a material providing enhanced feature contrast of the intraoral structure, such as black particles combined with a white powder. The scan images can be used to create a 3D digital impression or model of the intraoral structure. Both Marshall and Stegall are directed to dental scanning. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant's invention to modify the teachings of Marshall to include wherein during step b), the mandibular motion is facilitated by a contrast-enhancing medium, as taught by Stegall, to obtain an improved image of a patient's teeth (as suggested by para. 26).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Marshall (US 2023/0035538 A1) in view of Sachdeva (US 2020/0066391 A1) as applied to Claim 1 above, and further in view of Abrams (US 2013/0122468 A1).
As per Claim 5, Marshall teaches the method of claim 1, but does not teach wherein during step a), the generation of intra-oral data includes intra-orally scanning a single tooth of the patient. However, Abrams para. 69 teaches that FIG. 4 shows a hierarchical diagram 350 of an exemplary yet non-limiting embodiment of the user interface where the user may navigate among a variety of screens. The user interface comprises a series of welcome and login screens 355, which direct the user to a home screen 360, where the user may select between a series of setting screens 365 and patient/clinical screens 370. For example, the "History" screen shown in FIG. 3 is accessed by selecting the "History" screen 375 from the patient/clinical screens 365. Additional patient screens include the patient profile for entering and managing patient information, a scanning screen for scanning individual teeth using the diagnostic device, a treatment screen for entering treatment notes, a reporting screen for generating a patient encounter report, and a risk screen for entering patient risk factors and/or reviewing a risk assessment generated as described above. The user interface may be provided as a touchscreen interface for rapid and convenient navigation among the various screens, as shown in FIGS. 5(a) and 5(b). Both Marshall and Abrams are directed to dental scanning. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant's invention to modify the teachings of Marshall to include wherein during step a), the generation of intra-oral data includes intra-orally scanning a single tooth of the patient, as taught by Abrams, to increase system flexibility by allowing for scanning of individual or multiple teeth.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 9 is rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Marshall (US 2023/0035538 A1).
As per Claim 9, Marshall teaches an in-situ dental prosthesis manufacturing system, the system comprising:
an intra-oral scanning device; Marshall para. 68 teaches the dental impression 108 may also be a digital impression. The digital impression 108 may be represented by one or more of a point cloud, a polygonal mesh, a parametric model, or voxel data. The digital impression 108 can be generated directly from the dentition of the patient 150, using, for example, an intraoral scanner or other image capture system 107.
an extra-oral scanning device; Marshall paras. 70-72 teach that the motion capture system 200 can generate the motion data 110 from optical measurements of dental arches that are captured while the dentition of the patient 150 is moving. The optical measurements can be extracted from image and/or video data recorded (such as by the image capture system 107) while the dentition of the patient 150 is moving. The optical measurements can also be captured indirectly, such as by being extracted from images and/or video data generated by one or more other devices (e.g., a patient assembly such as patient assembly 204 described in FIGS. 2 and 3) that are secured to a portion of the dentition of the patient 150. The motion data 110 can be generated using other processes in some implementations. Further, the motion data 110 can include transformation matrices that represent position and orientation of the dental arches of the patient 150. Transformation matrices can describe relative position and orientation of the lower arch to the upper arch of the patient's teeth for each video frame. This information can be used to accurately animate the lower arch so that tooth relationships between the arches, throughout the motion, can be assessed and used to reposition any one or more of the teeth. These animated tooth positions can identify potential tooth interferences, which therefore can inform appropriate appliance design. Teeth can be repositioned, reshaped, and/or removed to avoid such interferences. For some appliances, multiple tooth contact points may be desirable as this can provide better support and potentially prevent fracturing at a single point of contact. The motion data 110 may also include a series of transformation matrices that represent various motions or functional paths of movement for the patient 150's dentition.
Moreover, still images and/or video data can be captured, by the image capture system 107, of the patient 150's dentition while the dentition is positioned in various bite locations. Image processing techniques can be used to determine positions of the patient 150's upper and lower arches relative to each other (either directly or based on the positions of the attached patient assembly 204). In some implementations, the motion data 110 can be generated, by the motion capture system 200, by interpolating between the positions of the upper and lower arches as determined from at least some of the captured images. Additional motion frames can be interpolated between the images that have been captured.
a computing device in communication with both the intra-oral scanning device and the extra-oral scanning device, Marshall para. 144 teaches Referring to the process 900, at operation 902, digital patient data, including motion data and a digital dental model, can be acquired by the computer system. For example, the digital patient data may include imaging data of the patient dentition, as captured by an imaging system. The imaging data may be captured using various imaging modalities, as described herein. In some implementations, the imaging data can include a 3D digital dental model of the patient's dentition. The 3D digital dental model may be captured using an intraoral scanner. The 3D digital dental model may be captured by scanning a physical impression or mold formed from a physical impression using a 3D scanner. In some implementations, the 3D digital dental model may also be generated by the computer system based on the imaging data and other digital patient data.
the computing device having a display and a processor; and Marshall paras. 226-227 teach the computing device 2950 includes, in some embodiments, at least one processing device 2960, such as a central processing unit (CPU). A variety of processing devices are available from a variety of manufacturers, for example, Intel or Advanced Micro Devices. In this example, the computing device 2950 also includes a system memory 2962, and a system bus 2964 that couples various system components including the system memory 2962 to the processing device 2960. The system bus 2964 is one of any number of types of bus structures including a memory bus, or memory controller; a peripheral bus; and a local bus using any of a variety of bus architectures.
Examples of computing devices suitable for the computing device 2950 include a desktop computer, a laptop computer, a tablet computer, a mobile computing device (such as a smartphone, an iPod® or iPad® mobile digital device, or other mobile devices), or other devices configured to process digital instructions. Further, para. 154 teaches in some implementations, the denture design system 116 can include a user interface that displays the digital dental model, the occlusal plane, and/or both. The user interface may be configured to receive user input to adjust the vertical dimension of occlusion or the position of the occlusal plane. For example, the user interface may be configured to receive a drag (e.g., click-and-drag or touch-and-drag) input from a relevant user (e.g., care provider, dentist, physician) to interactively move the mandibular arch of the digital dental model up or down along an arch defined by the motion data or a hinge axis inferred from the motion data. Similarly, the user interface may be configured to interactively move the occlusal plane along the arch between the mandibular arch and maxillary arch of the digital dental model.
a manufacturing device in communication with the computing device; Marshall paras. 192-193 teach
The example dental lab 2004 can include the 3D scanner 112, a treatment planning and appliance design system 2016, the rapid fabrication machine 119, and an appliance fabrication station 2022. Although shown as a single dental lab, the dental lab 2004 can also include multiple dental labs. For example, the 3D scanner 112 can be in a different dental lab than one or more of the other components shown in the dental lab 2004. Further, one or more of the components shown in the dental lab 2004 may not be in a dental lab. For example, one or more of the 3D scanner 112, appliance design system 2016, rapid fabrication machine 119, and appliance fabrication station 2022 can be in the dental office 102. Sometimes, the system 2000 may not include all of the components shown in the dental lab 2004.
The treatment planning and appliance design system 2016 can be configured to generate one or both of appliance data 2018 and a treatment plan 2025. The appliance data 2018 can be 3D digital data that represents an appliance component 2020 and can be in a format suitable for fabrication using the rapid fabrication machine 119.
wherein the intra-oral scanning device is configured to generate intra-oral data; Marshall paras. 67-68, quoted above with respect to Claim 1, teach that the digital impression 108 can be generated directly from the dentition of the patient 150, using, for example, an intraoral scanner or other image capture system 107.
wherein the extra-oral scanning device is configured to generate dynamic extra-oral maxillofacial data; Marshall paras. 8-9 teach this document generally describes technology for generating a digital tooth setup, which can be used to produce any of a variety of physical dental appliances, such as dentures, orthodontia, liners, dental implants (e.g., crowns, bridges), and/or other dental appliances. More particularly, the disclosed technology can be used to automatically determine an appropriate setup (e.g., design, fabrication, installation) of dentures or other dental appliances based on movement of a particular patient's mouth and other information about the patient's mouth that is unique to that particular patient. The disclosed technology can combine accurate motion data with three-dimensional (3D) intraoral scans (IOS), 3D cone beam CTs (CBCT), or other types of imaging data for improved treatment planning, digital appliance design, and case presentation. The disclosed technology provides a digital workflow presentable through various user-interactive user interfaces with precision data for designing crowns, bridges, dentures, splints, and other types of digital appliances for the different needs of different patients. Marshall paras. 70-72 teach the motion capture system 200 can generate the motion data 110 from optical measurements of dental arches that are captured while the dentition of the patient 150 is moving. The optical measurements can be extracted from image and/or video data recorded (such as by the image capture system 107) while the dentition of the patient 150 is moving. The optical measurements can also be captured indirectly, such as by being extracted from images and/or video data generated by one or more other devices (e.g., a patient assembly such as patient assembly 204 described in FIGS. 2 and 3) that are secured to a portion of the dentition of the patient 150. The motion data 110 can be generated using other processes in some implementations.
Further, as quoted above with respect to the extra-oral scanning device, the motion data 110 can include transformation matrices that describe the relative position and orientation of the lower arch to the upper arch of the patient's teeth for each video frame, and can be generated, by the motion capture system 200, by interpolating between positions of the upper and lower arches determined from still images and/or video data captured by the image capture system 107.
The motion data 110 may be captured while the patient 150's jaw is in various static positions and/or moving through various motions. For example, the motion data 110 may include a static measurement representing a centric occlusion (e.g., the patient's mandible closed with teeth fully engaged) and/or centric relation (e.g., the patient's mandible nearly closed, just before any shift occurs that is induced by tooth engagement or contact) bite. The motion data 110 may also include static measurements or sequences of data corresponding to protrusive (e.g., the patient's mandible being shifted forward while closed), lateral excursive (e.g., the patient's mandible shifted/rotated left and right while closed), hinging (e.g., the patient's mandible opening and closing without lateral movement), chewing (e.g., the patient's mandible chewing naturally to, for example, determine the most commonly used tooth contact points), and/or border movements (e.g., the patient's mandible is shifted in all directions while closed, for example, to determine the full range of motion). In some implementations, the motion data 110 can be captured while the patient 150 is using a Lucia jig or leaf gauge so that the patient 150's teeth (for patients who are not completely edentulous, for example) may not impact/contribute to the generated movement data. This motion data 110 may be used to determine properties of the patient 150's temporomandibular joint (TMJ). For example, hinging motion identified in the motion data 110 may be used, by the denture design system 116 described herein, to determine location of the hinge axis of the patient 150's TMJ. Knowing the location of the hinge axis for the particular patient 150 can be beneficial to design accurate and appropriately-fitting dentures for the patient 150.
wherein the processor of the computing device is configured to resolve the intra-oral data with the dynamic extra-oral maxillofacial data to generate a combined data model; Marshall paras. 8-9 teach the disclosed technology can combine accurate motion data with three-dimensional (3D) intraoral scans (IOS), 3D cone beam CTs (CBCT), or other types of imaging data for improved treatment planning, digital appliance design, and case presentation. The disclosed technology provides a digital workflow presentable through various user-interactive user interfaces with precision data for designing crowns, bridges, dentures, splints, and other types of digital appliances for the different needs of different patients.
Motion and/or image data can be captured of a patient's mouth and used in combination with a 3D model of the patient's mouth/teeth to design and set up dentures or other dental appliances for the patient. For example, motion data can be applied to a mandibular arch of the patient's teeth in the 3D model, since the mandibular arch affects tooth positioning. Applying the motion data to the arch can set the arch into motion, and a relevant user, such as a dentist, caregiver, or other operator, can use graphical user interface (GUI) features presented at a computing device to select which teeth to be moved. The motion data can be applied, by a computer system, to the selected teeth, thereby causing the computer system to automatically displace the selected moveable teeth to avoid interference with other teeth throughout an animation sequence. This animation sequence can help inform the user of an appropriate design for a dental appliance based on actual movement of the patient's jaw.
wherein the display of the computing device is configured to display a model representation of the dental prosthesis based on the combined data model; Marshall para. 21 teaches in some implementations, repositioning, by the computing system, the digital dental model can include: automatically adjusting a position of a condyle hinge axis for each frame in the motion data, rotating at least a portion of lower teeth represented by the digital dental model to a first contact point based on the adjusted condyle hinge axis, and returning a tooth setup indicating an arrangement of the portion of lower teeth based on the first contact point. Returning, by the computing system, the repositioned digital dental model can include generating a digital representation of a dental appliance based on the repositioned digital dental model. The digital representation of the dental appliance can include instructions for manufacturing the dental appliance. The method can also include transmitting, by the computing system, at least one of the digital representation of the dental appliance and the instructions for manufacturing the dental appliance to a rapid fabrication machine that can be configured to fabricate the dental appliance.
wherein the processor is further configured to output manufacturing parameters based on the combined data model; and Marshall para. 21 teaches the digital representation of the dental appliance can include instructions for manufacturing the dental appliance. The method can also include transmitting, by the computing system, at least one of the digital representation of the dental appliance and the instructions for manufacturing the dental appliance to a rapid fabrication machine that can be configured to fabricate the dental appliance.
wherein the manufacturing device is configured to manufacture the dental prosthesis based on the combined data model. Marshall para. 80 teaches the rapid fabrication machine 119 can include one or more 3D printers. Another example of the rapid fabrication machine 119 can include stereolithography equipment. Yet another example of the rapid fabrication machine 119 can be a milling device, such as a computer numerically controlled (CNC) milling device. In some implementations, the rapid fabrication machine 119 can be configured to receive files in STL format. Other embodiments of the rapid fabrication machine 119 may also be possible. The rapid fabrication machine 119 can be configured to receive the denture data 118 from the denture design system 116 and use that data 118 to fabricate (e.g., manufacture, build, produce) denture component 120.
As per Claim 10 Marshall teaches an in-situ dental prosthesis manufacturing system as claimed in claim 9, wherein the processor of the computing device is further configured to generate more than one said combined data model; wherein the display is configured to display the more than one said model representation based on the more than one combined data model respectively; and wherein the computing device is further configured to receive an input from a user and selects one combined data model based on the selected one model representation. Marshall para. 180 teaches in some implementations, any of the user interfaces described in reference to FIGS. 10-18 can allow the user to iterate through the techniques described herein for positioning digital denture teeth repeatedly and in any order/sequence. Therefore, the user can iterate through variations of digital denture teeth designs to efficiently determine a preferred design for the particular patient. Moreover, the digital denture teeth in the user interfaces of FIGS. 10-18 can behave as actual teeth would in the physical world, thereby enabling patient-specific standardization in denture design with time-saving functionality and improved efficiency and accuracy. Real-time collision detection techniques, as implemented by the computer system, can improve quality in denture design. Para. 256 teaches for example, the computer system can receive user input to display and/or adjust one or more tooth datums (operation 2226). The tooth datums can indicate points according to standards in the industry that are used to properly align teeth. The user can provide input (e.g., selecting a selectable option such as a button or checkbox) indicating a desire to view the datums overlaying the teeth in the model. Refer to FIG. 11 for further discussion about displaying the datums. 
The user can also provide input at their computing device to move locations of one or more of the tooth datums as they are depicted overlaying the teeth in the model. For example, the user can click and drag on a datum that the user desires to move. The user can move the datum to another desired location over one or more of the teeth.
As per Claim 11 Marshall teaches an in-situ dental prosthesis manufacturing system as claimed in claim 10 wherein the intra-oral scanning device includes an oral scanning device having an intra-oral detachable scanning element. Marshall para. 6 teaches the present invention relates to methods and apparatuses for using a wand of an intraoral scanner to scan both a subject's intraoral cavity as well as the subject's face. The subject's intraoral cavity is typically scanned to form a 3D model (e.g., a digital 3D model) of the intraoral cavity; the subject's face may be scanned to form one or more 2D images and/or in some variations a 3D image. These methods and apparatuses may store the intraoral cavity data (e.g., the scans and/or the 3D model of intraoral cavity) along with the face data (e.g., the scans and/or the 2D/3D model of the subject's face) as a single patient-specific data structure. Any appropriate scanning modality may be used, including confocal, structured light, etc. Para. 9 teaches in some variations a removable sleeve may include the second optical path for facial scanning, or for adapting the first optical path (the intraoral scanning optical path) for scanning the subject's face, e.g., by adjusting the depth of focus. For example, a removable sleeve may include lenses and/or other optical components (e.g., filters, mirrors, beamsplitters, prisms, diffraction gratings, etc.) forming the second optical path for imaging the subject's face. In particular, a removable sleeve may include a second optical path that modifies the first optical path (typically within the body of the wand) for intraoral scanning so that all or some of the sensors and/or optical components for intraoral scanning may be used for scanning the face.
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Marshall US 2023/0035538 A1 in view of Peleg US 2021/0128281 A1.
As per Claim 12 Marshall does not teach an in-situ dental prosthesis manufacturing system as claimed in claim 11, wherein the intra-oral scanning device includes two detachable scanning elements with different operation ranges that scan a single tooth and a full arch of the patient’s mandibular teeth and/or maxillary teeth, respectively, and/or wherein the extra-oral scanning device includes an extra-oral detachable scanning element which is compatible with the oral scanning device. However, Peleg para. 6 teaches the present invention relates to methods and apparatuses for using a wand of an intraoral scanner to scan both a subject's intraoral cavity as well as the subject's face. The subject's intraoral cavity is typically scanned to form a 3D model (e.g., a digital 3D model) of the intraoral cavity; the subject's face may be scanned to form one or more 2D images and/or in some variations a 3D image. These methods and apparatuses may store the intraoral cavity data (e.g., the scans and/or the 3D model of intraoral cavity) along with the face data (e.g., the scans and/or the 2D/3D model of the subject's face) as a single patient-specific data structure. Any appropriate scanning modality may be used, including confocal, structured light, etc. Peleg para. 9 teaches in some variations a removable sleeve may include the second optical path for facial scanning, or for adapting the first optical path (the intraoral scanning optical path) for scanning the subject's face, e.g., by adjusting the depth of focus. For example, a removable sleeve may include lenses and/or other optical components (e.g., filters, mirrors, beamsplitters, prisms, diffraction gratings, etc.) forming the second optical path for imaging the subject's face.
In particular, a removable sleeve may include a second optical path that modifies the first optical path (typically within the body of the wand) for intraoral scanning so that all or some of the sensors and/or optical components for intraoral scanning may be used for scanning the face. Both Marshall and Peleg are directed to dental scanning. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Marshall to include wherein the extra-oral scanning device includes an extra-oral detachable scanning element which is compatible with the oral scanning device, as taught by Peleg, to quickly and easily convert the system (e.g., a scanning wand) between face scanning and intraoral scanning (see Peleg para. 3).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 13 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Peleg US 2021/0128281 A1.
As per Claim 13 Peleg teaches An oral scanning device comprising:
a scanner body; and Peleg para. 51 teaches the enclosure of the wand can include a main body 108 and a scanning portion 106, which includes one or more optical components 104 (e.g., optical window) that transmit optical signals to and/or from the internal optical components. The scanning portion 106 or probe can have a shape and size adapted to maneuver around the patient's dentition and position an optical component 104 with respect to the patient's dentition and/or face. In some embodiments, the scanning portion 106 is at a distal end of the scanner 101 with the one or more optical component 104 at one side of the scanning portion 106. In some cases, at least part of the scanning portion 106 may enter into or come near the patient's mouth during an intraoral scanning operation. The scanning portion 106 can be connected to a main body 108 at a non-parallel angle to provide better access and maneuverability around the patient's dentition. The main body 108 can include a handle 110 that is sized and shaped for a practitioner to hold by hand. The main body 108 can include one or more controls 112 (e.g., actuators, buttons, switches, touchpads and/or sliders) for activating one or more functions of the scanner. In some cases, the main body includes one or more vents 116 (e.g., openings) that allow airflow to and from a ventilation component in the internal chamber of the scanner 101 for cooling the internal components of the scanner. In some cases, a proximal end of the main body 108 tapers at cable interface region 114 that couples the cable 109 to the main body 108.
a plurality of detachable scanning elements selectably engagable with the scanner body, Peleg para. 51, quoted above, teaches the scanning portion 106 or probe can have a shape and size adapted to maneuver around the patient's dentition and can be connected to the main body 108, which includes a handle 110 that is sized and shaped for a practitioner to hold by hand.
wherein one of the plurality of detachable scanning elements is an intra-oral scanning element, and Peleg para. 9 teaches In some variations a removable sleeve may include the second optical path for facial scanning, or for adapting the first optical path (the intraoral scanning optical path) for scanning the subject's face, e.g., by adjusting the depth of focus. For example, a removable sleeve may include lenses and/or other optical components (e.g., filters, mirrors, beamsplitters, prisms, diffraction gratings, etc.) forming the second optical path for imaging the subject's face. In particular, a removable sleeve may include a second optical path that modifies the first optical path (typically within the body of the wand) for intraoral scanning so that all or some of the sensors and/or optical components for intraoral scanning may be used for scanning the face.
one of the plurality of detachable scanning elements is an extra-oral scanning element.
Peleg para. 66 teaches As mentioned in some variations the apparatuses described herein may be configured for use with a Multi structured light (MSL) scanner for 3D scanning. In some variations these apparatuses may be configured as multi-use, add-on, sleeves that allow the MSL (Multi Structured Light) wand tip to capture a subject's face in 3D, as part of a complete “smile” design flow. The 3D capture can be based on passive stereo imaging (white light) or active structured light (laser based). Para. 71 teaches For example, the wand may include a plurality of (e.g., 5-6) full color cameras for purposes of the intraoral scanning. For capturing a scan of the subject's face (e.g., a 3D color face stereo capture), a plurality of cameras (e.g., two or more cameras) may be included, resulting in simple and low cost enhanced sleeve. In some variations, for better performance of 3D stereo reconstruction, the most distant camera pair is preferably used, e.g., for facial reconstructions. In some variations, one camera may be used to capture multiple 2D images. Post processing algorithms, such as SLAM (Simultaneous Localization And Mapping) may be used to construct a 3D model/image (with or without additional data such as IMU (Inertial Measurement Unit)). In some embodiments, one camera may be used in conjunction with structured light illumination, and the 3D data generation may be done as image processing, after the images are captured. In another embodiment, three or more cameras may be used to enhance the 3D capture quality.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEIRDRE D HATCHER whose telephone number is (571)270-5321. The examiner can normally be reached Monday-Friday 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEIRDRE D HATCHER/Primary Examiner, Art Unit 3625