DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 2 is objected to because the limitation “the determining of, based on the type of the object, whether to activate or deactivate the scan filtering mode comprises:” recited in the preamble contains grammatical and/or idiomatic errors. It is suggested to amend the preamble to read –determining whether to activate or deactivate the scan filtering mode based on the type of the object comprises:–.
Claim 7 is objected to because of the following informalities: the claim recites the limitation “The data processing method of claim 6, wherein the identifying of the type of the object further comprises” in the preamble, which appears to contain a typographical error. It is suggested to amend the claim to include a colon following the term ‘comprises’ to remain consistent with the other claims and conform with standard U.S. practice.
Claim 13 is objected to because of the following informalities: the claim recites the limitation “to determine, based on the type of object being not a model, to activate the scan filtering mode”, which appears to contain grammatical/idiomatic errors. It is suggested to amend the claim to recite “to determine, based on the type of object not being [[not]] a model, to activate the scan filtering mode”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 10 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to nonstatutory subject matter. Claim 10 recites the limitations “A computer-readable recording medium having recorded thereon a program for executing a data processing method, the data processing method comprising: identifying a type of an object; determining, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining”. The claim does not fall within at least one of the four categories of patent eligible subject matter because it recites software per se.
As the courts' definitions of machines, manufactures and compositions of matter indicate, a product must have a physical or tangible form in order to fall within one of these statutory categories (Digitech, 758 F.3d at 1348, 111 USPQ2d at 1719). Thus, the Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category (Digitech, 758 F.3d at 1350, 111 USPQ2d at 1720 (claimed "device profile" comprising two sets of data did not meet any of the categories because it was neither a process nor a tangible product)). Similarly, software expressed as code or a set of instructions detached from any medium is an idea without physical embodiment (see Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449, 82 USPQ2d 1400, 1407 (2007); see also Benson, 409 U.S. at 67, 175 USPQ at 675 (an "idea" is not patent eligible)). Thus, a product claim to a software program that does not also contain at least one structural limitation (such as a "means plus function" limitation) has no physical or tangible form, and thus does not fall within any statutory category.
Claims 1-9 and 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 recites the limitations “A data processing method performed by a data processing apparatus, the data processing method comprising: identifying a type of an object; determining, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining”; and claim 11 recites the limitations “A data processing apparatus comprising at least one processor configured to execute at least one instruction, wherein the at least one processor is further configured to execute the at least one instruction to: identify a type of an object; determine, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtain 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining”. Under the broadest reasonable interpretation, the limitations recited in these claims are directed to an abstract idea without significantly more.
The limitations of “identifying a type of an object; determining, […] whether to activate or deactivate a scan filtering mode; and” in claim 1, as drafted, constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. Similarly, the limitations of “identify a type of an object; determine, […] whether to activate or deactivate a scan filtering mode; and” in claim 11 cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “A data processing method performed by a data processing apparatus” in the preamble of claim 1, and reciting “A data processing apparatus comprising at least one processor configured to execute at least one instruction, wherein the at least one processor is further configured to execute the at least one instruction to” in claim 11, nothing in the claims precludes the steps from practically being performed in the mind.
For example, but for the “performed by a data processing apparatus” and “processor configured to” language, the “identify” step in the context of these claims encompasses the user visually inspecting an ‘object’ to determine what ‘type’ it belongs to. This amounts to a user looking at any form of ‘data’ and determining whether an ‘object’ present in the data is a part of a body, an artificial structure within the body, or is a model modeled after a part of the body (e.g., the oral cavity, teeth and gingivae, a plaster/impression model of the oral cavity, an artificial structure insertable into the oral cavity, and/or a plaster model or impression model of an artificial structure). These inspection steps, as drafted, are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. Similarly, the limitations of “determining, based on the type of the object, whether to activate or deactivate a scan filtering mode;” and “determine, based on the type of the object, whether to activate or deactivate a scan filtering mode;” in claims 1 and 11, respectively, are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. For example, but for the “performed by a data processing apparatus” and “processor configured to” language, the “determining” step in the context of these claims encompasses the user thinking about activating a particular filter to isolate or eliminate an identified object. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. The claim limitations “obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining” in claim 1 and “obtain 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining” in claim 11 recite the collection of data, which amounts to insignificant post-solution activity (data acquisition) and does not integrate the judicial exception into a practical application. The claims only recite one additional element – using a ‘data processing apparatus’ and ‘processor’ to perform the ‘identification’ and ‘determination’ steps. The ‘data processing apparatus’ and ‘processor’ are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of identifying objects and determining whether to filter out an identified object) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a ‘data processing apparatus’/processor to perform both the ‘identification’ and ‘determination’ steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The collection of data recited in the ‘obtaining 3D scan data’ step amounts to insignificant post-solution activity (data acquisition) and does not provide significantly more than the judicial exception. In view of the above, claims 1 and 11 fail to recite patent-eligible subject matter under 35 U.S.C. § 101 and are not patent eligible.
Dependent claims 2-9 and 12-20 fail to add additional elements that integrate the judicial exception into a practical application. Claims 2-3, 5-9, 12-14 and 16-20 fail to cure the deficiencies of claims 1 and 11 by merely reciting additional abstract ideas or further limitations on the abstract idea already recited. The limitations “outputting a user interface screen” in claim 4 and “output a user interface screen” in claim 15 constitute post-solution activity; as drafted, these limitations describe the display of information, which is not significantly more than what is well-known, routine and conventional in the art. Thus, dependent claims 2-9 and 12-20 are rejected under 35 U.S.C. § 101. Claims 1-20 are not patent eligible and are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 2-9 and 12-20 are also rejected at least by virtue of their dependency upon a rejected base claim.
First, claims 1, 10 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted steps are: the initial acquisition of ‘raw data’ using sufficient structure (e.g., a ‘3D scanner’) as well as the subsequent processing of the ‘raw data’ into ‘3D information’ using sufficient structure (e.g., the ‘3D scanner’, a ‘data processing apparatus’, etc.); and the transmission (e.g., through a communication network) of either the ‘raw data’ or the ‘3D information’ to a structure (e.g., the ‘data processing apparatus’ or an ‘external device’) which performs the data processing. For the purposes of examination, the broadest reasonable interpretation of the claim language in view of the instant disclosure of Applicant’s specification is applied to examine the limitations.
Claim 1 recites the limitation “identifying a type of an object; determining, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining” which renders the claim indefinite. Similarly, the limitations “identifying a type of an object; determining, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining” in claim 10; and “identify a type of an object; determine, based on the type of the object, whether to activate or deactivate a scan filtering mode; and obtain 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining” in claim 11 render these claims indefinite. First, it is not clear from what (e.g., ‘3D scan data’, raw data, other information, etc.) any of the method steps or processing instructions ‘identify’ the ‘object’. Next, it is unclear what it means to ‘activate or deactivate a scan filtering mode’, because no corresponding structures (e.g., a scanner, a processor, etc.) are recited in the claims and linked to the activation/deactivation limitations. Similarly, it is not clear what structures obtain the ‘3D scan data’ (e.g., a scanner, a processor, an external device, etc.) in response to the previously recited limitations.
Furthermore, the steps of ‘determining whether to activate or deactivate a scan filtering mode’ and ‘obtaining 3D scan data by activating or deactivating the scan filtering mode’ recited in the independent claims are unclear and render the claims indefinite. It is not clear if a ‘scan filter’ (or what type of ‘scan filter’) is initially used to obtain the data or information from which the ‘object’ is ‘identified’. Accordingly, given the branching language of either ‘activating’ or ‘deactivating’, it is not clear what the initial activated or deactivated state of the ‘scan filtering mode’ is when the method begins, and therefore which branch of the limitation applies.
It is suggested to amend the claims to clarify the functional steps and to point out what specific structures perform the functions. For the purposes of examination, the broadest reasonable interpretation of the claim language – including the interpretations discussed above – is applied to the limitations.
Claim 2 recites the limitations “based on the type of the object being a model, determining to deactivate the scan filtering mode; and based on the type of the object not being a model, determining to activate the scan filtering mode” which render the claim indefinite. The limitations do not define what a ‘model’ is within the context of the claim language (e.g., a 3D/virtual model, a physical model of an anatomical feature, a virtual or physical model of an artificial structure, etc.). The recitations in claim 12 (i.e., “the at least one processor is further configured to execute the at least one instruction to determine, based on the type of the object being a model, to deactivate the scan filtering model”) and claim 13 (i.e., “the at least one processor is further configured to execute the at least one instruction and to determine, based on the type of object being not a model, to activate the scan filtering mode”) are similarly rejected under the same logic as applied to claim 2. It is suggested to amend the claims to define what the ‘model’ actually is. For the purposes of examination, any type of model (virtual, physical, anatomical or artificial structure, etc.) is applied to the limitations.
Claims 3 and 14 recite the limitation “changing a current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining” which is unclear and renders the claims indefinite. The recursive language does not particularly point out what the ‘current state of the scan filtering mode’ is, when the ‘changing’ occurs relative to the ‘obtaining’ step, nor how it is performed. It is not clear if the ‘current activated or deactivated state’ refers to the ‘activated or deactivated state’ from the ‘determining’ step in claims 1 and 11 or to the ‘activated or deactivated state’ from the ‘obtaining’ step. The claim wording is indefinite regarding whether the ‘activation/deactivation’ occurs following the ‘determining’ step, during the ‘obtaining’ step or after the ‘obtaining’ step. For example, in one interpretation the ‘state’ may be initially ‘activated’ during the ‘determining’ step in claims 1 and 11, may be ‘deactivated’ for the ‘obtaining’ step in claims 1 and 11, and then may be ‘activated’ again in claims 3 and 14. In another interpretation the limitation of claims 3 and 14 is already performed by the limitations of claims 1 and 11, respectively. It is suggested to amend the claims to clearly define what the ‘activation/deactivation’ process is in view of the limitations of the independent claims.
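For illustration only, the following is a minimal Python sketch of the second interpretation discussed above, under which the ‘changing’ step collapses to adopting the determined state. All names are hypothetical assumptions; this is not asserted to be Applicant’s method:

```python
# Hypothetical sketch of one reading of claims 3 and 14: toggle the current
# activated/deactivated state only when it does not correspond to the result
# of the 'determining' step. Names and logic are illustrative assumptions.

def reconcile_filtering_state(current_active: bool, determined_active: bool) -> bool:
    """Change the current state only when it does not match the determination."""
    if current_active != determined_active:
        return not current_active  # toggling necessarily yields the determined state
    return current_active

# Under this reading the result always equals determined_active, which is why
# the limitation may add nothing beyond the 'determining' step of claims 1/11.
assert reconcile_filtering_state(True, False) is False
assert reconcile_filtering_state(False, True) is True
```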
Claim 4 recites the limitations “outputting a user interface screen for changing the current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining;” which render the claim indefinite. It is not clear whether this ‘user interface screen’ merely displays information or is used to manually change the ‘current activated or deactivated state’, nor is it clear what the ‘user interface screen’ is displayed upon (e.g., a monitor, a printout image, an electronic report, etc.). For the purposes of examination, any type of ‘user interface screen’ messaging – including those discussed above – is applied to the limitations.
Claim 5 recites the limitations “receiving a signal for changing […] the signal being input in response to the outputting of the user interface screen;”, which are unclear. It is not clear how the ‘signal’ is input; in one interpretation the ‘signal’ may be generated as input from an automatic process (e.g., a processor), and in another interpretation the ‘signal’ may be input by a user or operator performing the data processing method. For the purposes of examination, the broadest reasonable interpretation of any type of ‘input signal’ as discussed above is applied to the limitations.
Claims 6 and 17 recite the limitation “a percentage of pixels identified as the specific class among all pixels included in a frame received from a scanner is greater than or equal to a reference value” which renders the claims indefinite. Neither the instant claim 6 nor the independent claim 1 clearly defines the ‘identification’ step by indicating from what data the ‘object’ is identified. Accordingly, the relationship between the ‘frame received from a scanner’ and the ‘object’ is not clear. It is not clear whether the ‘type of object’ is identified from the ‘frame’ recited in claims 6 and 17, or from another type of data received as part of the data processing methods of claims 1 and 11. The claims must be amended to clearly indicate where and how the ‘type of object’ is identified, and to clearly link the ‘frame received from a scanner’ in claims 6 and 17 to the ‘object’ from claims 1 and 11, respectively. For the purposes of examination, the broadest reasonable interpretation of the claim language is applied to the limitations.
Claims 7 and 18 recite the limitation “identifying the object as the specific class when a percentage of frames identified as the specific class among a reference number of frames obtained after a scan operation starts is greater than or equal to a reference value” which renders the claims indefinite. There is insufficient antecedent basis for these limitations in the claims. It is not clear if the ‘reference value’ in the instant claims 7 and 18 refers to the ‘reference value’ recited in claims 6 and 17 or to a new ‘reference value’. Furthermore, the instant claims 7 and 18 introduce a plurality of ‘frames’, while claims 6 and 17 (upon which claims 7 and 18 respectively depend) recite only a single ‘frame’ from which the ‘specific class’ is identified – these claims are not consistent, and it is not clear that the ‘scan operation’ of claims 7 and 18 acquires a plurality of frames because there is no prior recitation of what the ‘scan operation’ entails. The claims must be amended to clearly indicate how the ‘scan operation’ is performed (e.g., acquiring a single frame, acquiring a plurality of frames, etc.), how the ‘frame’ or ‘frames’ are selected for the identification process, what constitutes the ‘reference value’, and so on. For the purposes of examination, the broadest reasonable interpretation of the claim language is applied to the limitations.
Claim 8 recites the limitations “wherein the scan filtering mode comprises a plurality of activation modes depending on objects to be filtered, the obtaining of the 3D data on the object by activating the scan filtering mode comprises obtaining filtered 3D scan data on the object” which render the claim indefinite. The claim language is unclear and inconsistent regarding the number of ‘objects’ which are identified. It is not clear whether the ‘scan filtering mode’ filters multiple objects or a single object. The use of ‘the 3D data’ in claim 8 lacks sufficient antecedent basis, because it is not clear if this is new ‘3D data’ or refers to the ‘3D scan data’ recited in claim 1. Furthermore, it is unclear if the ‘filtered 3D scan data’ refers to the ‘3D scan data’ which is obtained or is generated in addition to the ‘3D scan data’. The mirrored limitations in claim 19 are rejected under similar rationale as applied to claim 8. For the purposes of examination, the broadest reasonable interpretation of the claim language includes any number of ‘objects’ and any number of ‘filtered/3D scan data’.
Claims 9 and 20 recite the limitation “identifying pixels of a class to be filtered in the one activation mode among pixels included in a frame received from a scanner” which renders the claims indefinite. Neither the instant claims 9 and 20, nor claims 1 and 8 (upon which claim 9 depends) or claims 11 and 19 (upon which claim 20 depends), clearly define the ‘identification’ step by indicating how the ‘pixels included in a frame’ are related to the ‘object’ from claims 1 and 8, nor how the ‘type of an object’ is related to the ‘pixels of a class to be filtered’. The claims must be amended to clearly indicate how the ‘pixels’ are identified, and to clearly link the ‘frame received from a scanner’ in claim 9 to the ‘object’ from claims 1 and 8. Similar consideration must be applied to the language of claim 20 as discussed regarding claim 9. For the purposes of examination, the broadest reasonable interpretation of the claim language is applied to the limitations.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being clearly anticipated by Minchenkov et al. (US20200349698A1, 2020-11-05; hereinafter “Minchenkov”).
Regarding claim 1, Minchenkov teaches a data processing method performed by a data processing apparatus (“A method includes processing an input comprising data from an intraoral image using a trained machine learning model that has been trained to classify regions of dental sites” [abst]; “A method comprising: […] processing a plurality of inputs using a trained machine learning model that has been trained to classify regions of dental sites,” [clm 19]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6]), the data processing method comprising:
identifying a type of an object (“wherein for each intraoral image of the plurality of intraoral images, the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “Intraoral scan application 115 may include logic (e.g., intraoral image classifying module 119) for automatically segmenting intraoral images generated by scanner 150 during intraoral scanning.” [0049]; “a machine learning model 255 is trained to segment intraoral images by classifying regions of those intraoral images into one or more dental classes.” [0068]; The machine learning model processes input data to classify pixels (i.e., identify object type) into either of a first or second dental class [0040-0052, 0083-0094], [fig. 1-3C, 5A-6]);
determining, based on the type of the object, whether to activate or deactivate a scan filtering mode (“for each point in the three-dimensional model, determining whether the point is classified as excess material based on at least one of a) the one or more first probabilities or b) the one or more second probabilities; and” [clm 19]; “The machine learning models may be trained to automatically classify and/or segment intraoral scans during or after an intraoral scanning session, and the segmentation/classification may be used to automatically remove excess material from the images.” [0067]; The probability map for each pixel is used to determine pixels that represent excess material, wherein the excess material is classified/segmented and may be subsequently removed [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]); and
obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining (“generating a three-dimensional model of the dental site […] modifying the three-dimensional model by hiding or removing from the three-dimensional model those points that are classified as excess material.” [clm 19]; “Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; The generated virtual 3D model may be updated based on the output of the machine learning model to reflect the removal of excess material [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]).
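For illustration only, the mapping applied above may be sketched as the following minimal Python fragment – per-pixel classification into dental classes (cf. Minchenkov [0105]), a filtering decision based on the identified type, and removal of flagged pixels from a height map (cf. [0107]). All function names, class indices and data are hypothetical assumptions and are not taken from Minchenkov or from the instant claims:

```python
# Hypothetical pipeline sketch: identify per-pixel types, decide whether
# filtering is active, and obtain scan data with flagged pixels removed.
import numpy as np

def identify_types(prob_map: np.ndarray) -> np.ndarray:
    """prob_map: (H, W, C) per-pixel class probabilities (cf. [0105]).
    Returns the most probable class index for each pixel."""
    return prob_map.argmax(axis=-1)

def decide_filtering(labels: np.ndarray, model_class: int = 0) -> bool:
    """Invented decision logic echoing claims 2/12-13: deactivate filtering
    when the dominant identified type is the hypothetical 'model' class."""
    dominant = int(np.bincount(labels.ravel()).argmax())
    return dominant != model_class

def obtain_scan_data(height_map: np.ndarray, labels: np.ndarray,
                     filter_on: bool, excess_class: int = 2) -> np.ndarray:
    """When filtering is active, set excess-material pixels to 'no surface'
    (cf. [0107], reducing the height-map value to zero)."""
    out = height_map.copy()
    if filter_on:
        out[labels == excess_class] = 0.0
    return out

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(4, 4))   # toy 4x4 map, 3 classes
labels = identify_types(probs)
scan = obtain_scan_data(rng.random((4, 4)), labels, decide_filtering(labels))
```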
Regarding claim 2, Minchenkov teaches the data processing method of claim 1, Minchenkov further teaching wherein the determining of, based on the type of the object, whether to activate or deactivate the scan filtering mode comprises:
based on the type of the object being a model, determining to deactivate the scan filtering mode (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes.” [0105]; The pixels that are identified as teeth or gums are not removed from the intraoral image [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 1 rejection]); and
based on the type of the object not being a model, determining to activate the scan filtering mode (“The intraoral image 248 (and optionally other data) is input into trained model 255,” [0104]; “The probability map may be used to determine pixels that represent excess material. Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel (e.g., reducing a height map value for the pixel to zero or another predefined value).” [0107]; Pixels which are identified as excess material may be filtered out of the intraoral image [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 3, Minchenkov teaches the data processing method of claim 1, Minchenkov further teaching wherein the obtaining of the 3D scan data on the object comprises:
changing a current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining (“generating a three-dimensional model of the dental site […] modifying the three-dimensional model by hiding or removing from the three-dimensional model those points that are classified as excess material.” [clm 19]; “Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; The generated virtual 3D model may be updated based on the output of the machine learning model to reflect the removal of excess material [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 1 rejection]); and
obtaining 3D scan data on the object based on the changed activated or deactivated state of the scan filtering mode (“the probability map is used to update the intraoral image to generate a modified intraoral image. The probability map may be used to determine pixels that represent excess material. Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image.” [0107]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 4, Minchenkov teaches the data processing method of claim 3, Minchenkov further teaching further comprising
outputting a user interface screen for changing the current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining (“Computing device 105 and computing device 106 may each include one or more processing devices, memory, secondary storage, one or more input devices (e.g., such as a keyboard, mouse, tablet, and so on), one or more output devices (e.g., a display, a printer, etc.), and/or other hardware components.” [0042]; “Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. […] The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]),
wherein the user interface screen comprises at least one of the type of the object, a message guiding changing of the activated or deactivated state of the scan filtering mode based on the type of the object, and a message guiding changing of a scan operation (“Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. […] The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; “processing logic modifies the virtual 3D model by removing from the 3D model those points that are classified as excess material. In some embodiments, this includes filtering out the points without actually removing the points from the 3D model. Accordingly, a user may turn off the filtering to view the excess material” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]).
Regarding claim 5, Minchenkov teaches the data processing method of claim 4, Minchenkov further teaching wherein the changing of the current activated or deactivated state of the scan filtering mode comprises:
receiving a signal for changing the current activated or deactivated state of the scan filtering mode, the signal being input in response to the outputting of the user interface screen (“Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. The doctor can virtually manipulate the 3D model via the user interface with respect to up to six degrees of freedom (i.e., translated and/or rotated with respect to one or more of three mutually orthogonal axes) using suitable user controls (hardware and/or virtual) to enable viewing of the 3D model from any desired direction. The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; “processing logic modifies the virtual 3D model by removing from the 3D model those points that are classified as excess material. In some embodiments, this includes filtering out the points without actually removing the points from the 3D model. Accordingly, a user may turn off the filtering to view the excess material.” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]); and
changing the current activated or deactivated state of the scan filtering mode based on the signal (“Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; “a user may turn off the filtering to view the excess material.” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 6, Minchenkov teaches the data processing method of claim 1,
Minchenkov further teaching wherein the identifying of the type of the object comprises identifying the object as a specific class when a percentage of pixels identified as the specific class among all pixels included in a frame received from a scanner is greater than or equal to a reference value (“the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “When a scan session is complete (e.g., all images for an intraoral site or dental site have been captured), intraoral scan application 115 may generate a virtual 3D model of one or more scanned dental sites.” [0045]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; Every pixel within each intraoral image is assigned a probability of falling within a specific class, wherein a probability threshold for pixels may be used to identify excess material [0040-0052, 0083-0094], [fig. 1-3C, 5A-6]).
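For illustration only, the percentage-of-pixels test recited in claims 6 and 17 may be sketched in Python as follows. The class index and reference value are invented placeholders; note also that this sketch implements the claimed percentage test, whereas the Minchenkov passages quoted above describe a per-pixel probability threshold:

```python
# Hypothetical sketch of the claim 6/17 limitation: label the whole frame as
# a class when the fraction of its pixels in that class meets a reference value.
import numpy as np

def classify_frame(pixel_labels: np.ndarray, specific_class: int,
                   reference_value: float = 0.5) -> bool:
    """True when the percentage of pixels identified as specific_class,
    among all pixels included in the frame, is >= reference_value."""
    return float(np.mean(pixel_labels == specific_class)) >= reference_value

frame = np.array([[2, 2, 0],
                  [2, 1, 2]])
print(classify_frame(frame, specific_class=2))  # 4/6 >= 0.5 -> True
```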
Regarding claim 7, Minchenkov teaches the data processing method of claim 6, Minchenkov further teaching wherein the identifying of the type of the object further comprises
identifying the object as the specific class when a percentage of frames identified as the specific class among a reference number of frames obtained after a scan operation starts is greater than or equal to a reference value (“wherein each of the plurality of inputs comprises data from at least two sequential intraoral images from the plurality of intraoral images” [clm 22]; “image registration is performed for adjacent or overlapping intraoral images (e.g., each successive frame of an intraoral video).” [0046]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 6 rejection]).
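For illustration only, one reading of the claim 7/18 frame-voting limitation is sketched below in Python; the per-frame helper repeats the claim 6/17 sketch above, and the reference number and reference values are invented placeholders rather than values from the claims or from Minchenkov:

```python
# Hypothetical sketch of the claim 7/18 limitation: identify the object as a
# class when enough of the first N frames after the scan starts are so labeled.
import numpy as np

def classify_frame(frame: np.ndarray, cls: int, ref: float = 0.5) -> bool:
    """Per-frame percentage-of-pixels test (see the claim 6/17 sketch)."""
    return float(np.mean(frame == cls)) >= ref

def classify_object(frames: list, specific_class: int,
                    reference_number: int = 10,
                    frame_reference_value: float = 0.6) -> bool:
    """True when the percentage of frames identified as specific_class, among
    the first reference_number frames, is >= frame_reference_value."""
    first = frames[:reference_number]
    votes = [classify_frame(f, specific_class) for f in first]
    return sum(votes) / len(votes) >= frame_reference_value

rng = np.random.default_rng(1)
frames = [rng.integers(0, 3, size=(8, 8)) for _ in range(10)]
print(classify_object(frames, specific_class=2))
```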
Regarding claim 8, Minchenkov teaches the data processing method of claim 1,
Minchenkov further teaching wherein the scan filtering mode comprises a plurality of activation modes depending on objects to be filtered (“the machine learning model may be trained to generate a probability map, where each point in the probability map corresponds to a pixel of an input image and indicates one or more of a first probability that the pixel represents a first dental class, a second probability that the pixel represents a second dental class, a third probability that the pixel represents a third dental class, a fourth probability that the pixels represents a fourth dental class, a fifth probability that the pixel represents a fifth dental class, and so on.” [0080]; A plurality of dental classes (e.g., teeth, gums, excess material) may be segmented by the trained machine learning model [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]),
the obtaining of the 3D data on the object by activating the scan filtering mode comprises obtaining filtered 3D scan data on the object in one activation mode among the plurality of activation modes (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes. In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel” [0105]; “In a first technique for applying the probability map, processing logic determines whether the probability of a pixel being excess material is larger than the probabilities of the pixel being anything else (e.g., larger than the probability of the pixel being gums or teeth. A pixel is then determined to be in the excess material class” [0111]; The trained machine model assigns probabilities to the pixels, and then may apply a filter to remove excess material based on the determination that one probability is greater than other probabilities [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]), and
the one activation mode among the plurality of activation modes comprises at least one of an activation mode selected by a user, an activation mode selected by default, and an activation mode used in a previous scan operation (“given a probability map that includes probabilities of pixels belonging to various dental classes, where at least one of those dental classes is an excess material dental class, the system can filter out pixels that correspond to excess material.” [0110]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
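For illustration only, a ‘plurality of activation modes’ may be sketched in Python as follows; the mode names and class indices are invented, the argmax comparison loosely follows the probability comparison of Minchenkov [0111], and the height-map zeroing follows [0107]:

```python
# Hypothetical sketch of activation modes: each mode names the dental class
# to be filtered; the active mode removes pixels whose most probable class
# matches that target class.
import numpy as np

ACTIVATION_MODES = {"filter_excess_material": 2, "filter_gingiva": 1}

def apply_activation_mode(prob_map: np.ndarray, height_map: np.ndarray,
                          mode: str) -> np.ndarray:
    """Zero out pixels whose argmax class equals the mode's target class."""
    target = ACTIVATION_MODES[mode]
    labels = prob_map.argmax(axis=-1)          # most probable class per pixel
    filtered = height_map.copy()
    filtered[labels == target] = 0.0           # 'no surface at the pixel'
    return filtered

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(3), size=(4, 4))
out = apply_activation_mode(probs, rng.random((4, 4)), "filter_excess_material")
```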
Regarding claim 9, Minchenkov teaches the data processing method of claim 8,
Minchenkov further teaching wherein the obtaining of the filtered 3D scan data on the object in the one activation mode of the plurality of activation modes comprises:
identifying pixels of a class to be filtered in the one activation mode among pixels included in a frame received from a scanner (“the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “When a scan session is complete (e.g., all images for an intraoral site or dental site have been captured), intraoral scan application 115 may generate a virtual 3D model of one or more scanned dental sites.” [0045]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; Every pixel within each intraoral image is assigned a probability of falling within a specific class, wherein a probability threshold for pixels may be used to identify excess material [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 6 rejection]); and
obtaining filtered 3D scan data on the object by removing pixels of the identified class (“Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 10, Minchenkov teaches a computer-readable recording medium having recorded thereon a program for executing a data processing method (“A method includes processing an input comprising data from an intraoral image using a trained machine learning model that has been trained to classify regions of dental sites” [abst]; “The methods may be performed by a processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof” [0050]; “A method comprising: […] processing a plurality of inputs using a trained machine learning model that has been trained to classify regions of dental sites,” [clm 19]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6]), the data processing method comprising:
identifying a type of an object (“wherein for each intraoral image of the plurality of intraoral images, the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “Intraoral scan application 115 may include logic (e.g., intraoral image classifying module 119) for automatically segmenting intraoral images generated by scanner 150 during intraoral scanning.” [0049]; “a machine learning model 255 is trained to segment intraoral images by classifying regions of those intraoral images into one or more dental classes.” [0068]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]);
determining, based on the type of the object, whether to activate or deactivate a scan filtering mode (“for each point in the three-dimensional model, determining whether the point is classified as excess material based on at least one of a) the one or more first probabilities or b) the one or more second probabilities; and” [clm 19]; “The machine learning models may be trained to automatically classify and/or segment intraoral scans during or after an intraoral scanning session, and the segmentation/classification may be used to automatically remove excess material from the images.” [0067]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]); and
obtaining 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining (“generating a three-dimensional model of the dental site […] modifying the three-dimensional model by hiding or removing from the three-dimensional model those points that are classified as excess material.” [clm 19]; “Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 11, Minchenkov teaches a data processing apparatus comprising at least one processor configured to execute at least one instruction (“A method includes processing an input comprising data from an intraoral image using a trained machine learning model that has been trained to classify regions of dental sites” [abst]; “at least some operations of the methods are performed by a computing device executing an intraoral scan application 115 and/or an intraoral image classifying module 119” [0050]; “A method comprising: […] processing a plurality of inputs using a trained machine learning model that has been trained to classify regions of dental sites,” [clm 19]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6]),
wherein the at least one processor is further configured to execute the at least one instruction to:
identify a type of an object (“wherein for each intraoral image of the plurality of intraoral images, the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “Intraoral scan application 115 may include logic (e.g., intraoral image classifying module 119) for automatically segmenting intraoral images generated by scanner 150 during intraoral scanning.” [0049]; “a machine learning model 255 is trained to segment intraoral images by classifying regions of those intraoral images into one or more dental classes.” [0068]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]);
determine, based on the type of the object, whether to activate or deactivate a scan filtering mode (“for each point in the three-dimensional model, determining whether the point is classified as excess material based on at least one of a) the one or more first probabilities or b) the one or more second probabilities; and” [clm 19]; “The machine learning models may be trained to automatically classify and/or segment intraoral scans during or after an intraoral scanning session, and the segmentation/classification may be used to automatically remove excess material from the images.” [0067]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]); and
obtain 3D scan data on the object by activating or deactivating the scan filtering mode according to the determining (“generating a three-dimensional model of the dental site […] modifying the three-dimensional model by hiding or removing from the three-dimensional model those points that are classified as excess material.” [clm 19]; “Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 1 rejection]).
Regarding claim 12, Minchenkov teaches the data processing apparatus of claim 11,
Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction to determine, based on the type of the object being a model, to deactivate the scan filtering mode (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes.” [0105]; The pixels that are identified as teeth or gums are not removed from the intraoral image [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 2 rejection]).
Regarding claim 13, Minchenkov teaches the data processing apparatus of claim 11,
Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction and to determine, based on the type of object being not a model, to activate the scan filtering mode (“The intraoral image 248 (and optionally other data) is input into trained model 255,” [0104]; “The probability map may be used to determine pixels that represent excess material. Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel (e.g., reducing a height map value for the pixel to zero or another predefined value).” [0107]; Pixels which are identified as excess material may be filtered out of the intraoral image [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 2 rejection]).
Regarding claim 14, Minchenkov teaches the data processing apparatus of claim 11, Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction to:
change a current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining (“generating a three-dimensional model of the dental site […] modifying the three-dimensional model by hiding or removing from the three-dimensional model those points that are classified as excess material.” [clm 19]; “Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 3 rejection]); and
obtain 3D scan data on the object based on the changed activated or deactivated state of the scan filtering mode (“the probability map is used to update the intraoral image to generate a modified intraoral image. The probability map may be used to determine pixels that represent excess material. Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image.” [0107]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 3 rejection]).
Regarding claim 15, Minchenkov teaches the data processing apparatus of claim 14,
Minchenkov further teaching further comprising a display (“Computing device 105 and computing device 106 may each include one or more processing devices, memory, secondary storage, one or more input devices (e.g., such as a keyboard, mouse, tablet, and so on), one or more output devices (e.g., a display, a printer, etc.), and/or other hardware components.” [0042]; “The computing device 1300 also may include a video display unit 1310 (e.g., a liquid crystal display (LCD)” [0155]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]),
wherein the at least one processor is further configured to execute the at least one instruction to output a user interface screen for changing the current activated or deactivated state of the scan filtering mode based on the current activated or deactivated state of the scan filtering mode not corresponding to the determining (“Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. […] The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]),
wherein the user interface screen comprises at least one of the type of the object, a message guiding changing of the activated or deactivated state of the scan filtering mode based on the type of the object, and a message guiding changing of a scan operation (“Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. […] The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; “processing logic modifies the virtual 3D model by removing from the 3D model those points that are classified as excess material. In some embodiments, this includes filtering out the points without actually removing the points from the 3D model. Accordingly, a user may turn off the filtering to view the excess material” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]).
Regarding claim 16, Minchenkov teaches the data processing apparatus of claim 15,
Minchenkov further teaching further comprising a user input unit (“Computing device 105 and computing device 106 may each include one or more processing devices, memory, secondary storage, one or more input devices (e.g., such as a keyboard, mouse, tablet, and so on), one or more output devices (e.g., a display, a printer, etc.), and/or other hardware components.” [0042]; “Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface.” [0048]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6]),
wherein the at least one processor is further configured to execute the at least one instruction to:
receive a signal for changing the activated or deactivated state of the scan filtering mode, the signal being input through the user interface screen in response to the outputting of the user interface screen (“Intraoral scan application 115 may generate a 3D model from intraoral images, and may display the 3D model to a user (e.g., a doctor) via a user interface. The 3D model can then be checked visually by the doctor. The doctor can virtually manipulate the 3D model via the user interface with respect to up to six degrees of freedom (i.e., translated and/or rotated with respect to one or more of three mutually orthogonal axes) using suitable user controls (hardware and/or virtual) to enable viewing of the 3D model from any desired direction. The doctor may review (e.g., visually inspect) the generated 3D model of an intraoral site and determine whether the 3D model is acceptable (e.g., whether a margin line of a preparation tooth is accurately represented in the 3D model).” [0048]; “processing logic modifies the virtual 3D model by removing from the 3D model those points that are classified as excess material. In some embodiments, this includes filtering out the points without actually removing the points from the 3D model. Accordingly, a user may turn off the filtering to view the excess material.” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 4, 5 rejections]); and
change the current activated or deactivated state of the scan filtering mode based on the signal (“Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; “a user may turn off the filtering to view the excess material.” [0147]; [0040-0052, 0083-0110], [fig. 1-3C, 5A-6], [see claim 5 rejection]).
Regarding claim 17, Minchenkov teaches the data processing apparatus of claim 11,
Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction to identify the object as a specific class when a percentage of pixels identified as the specific class among all pixels included in a frame received from a scanner is greater than or equal to a reference value (“the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “When a scan session is complete (e.g., all images for an intraoral site or dental site have been captured), intraoral scan application 115 may generate a virtual 3D model of one or more scanned dental sites.” [0045]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 6 rejection]).
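For illustration, a minimal sketch of the pixel-level thresholding quoted above from [0112], extended to the frame-level percentage test recited in claim 17; the probability threshold follows [0112], while the pixel-percentage reference value and all names are hypothetical.

```python
import numpy as np

def frame_is_class(prob_map, class_index, prob_threshold=0.5, pixel_ratio=0.3):
    """prob_map has shape (H, W, num_classes), with per-pixel class
    probabilities that sum to 1.0 (cf. [0105]). A pixel is classified
    into the class when its probability exceeds prob_threshold ([0112]);
    the frame is identified as the class when the fraction of such
    pixels meets pixel_ratio (hypothetical reference value)."""
    class_pixels = prob_map[..., class_index] > prob_threshold
    return bool(class_pixels.mean() >= pixel_ratio)
```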
Regarding claim 18, Minchenkov teaches the data processing apparatus of claim 17,
Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction to identify the object as the specific class when a percentage of frames identified as the specific class among a reference number of frames obtained after a scan operation starts is greater than or equal to a reference value (“wherein each of the plurality of inputs comprises data from at least two sequential intraoral images from the plurality of intraoral images” [clm 22]; “image registration is performed for adjacent or overlapping intraoral images (e.g., each successive frame of an intraoral video).” [0046]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 7 rejection]).
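Extending the same sketch to the frame-percentage test recited in claim 18, again with a hypothetical reference value:

```python
def object_is_class(frame_results, frame_ratio=0.5):
    """Identify the object as the class when the fraction of frames
    classified as that class, among a reference number of frames
    obtained after the scan operation starts, meets frame_ratio
    (hypothetical reference value)."""
    return sum(frame_results) / len(frame_results) >= frame_ratio

# e.g., 3 of the first 4 frames were classified as the class:
# object_is_class([True, True, False, True])  -> True
```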
Regarding claim 19, Minchenkov teaches the data processing apparatus of claim 11, Minchenkov further teaching wherein the scan filtering mode comprises a plurality of activation modes depending on objects to be filtered (“the machine learning model may be trained to generate a probability map, where each point in the probability map corresponds to a pixel of an input image and indicates one or more of a first probability that the pixel represents a first dental class, a second probability that the pixel represents a second dental class, a third probability that the pixel represents a third dental class, a fourth probability that the pixels represents a fourth dental class, a fifth probability that the pixel represents a fifth dental class, and so on.” [0080]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 8 rejection]),
the at least one processor is further configured to execute the at least one instruction to obtain filtered 3D scan data on the object in one activation mode among the plurality of activation modes (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes. In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel” [0105]; “In a first technique for applying the probability map, processing logic determines whether the probability of a pixel being excess material is larger than the probabilities of the pixel being anything else (e.g., larger than the probability of the pixel being gums or teeth. A pixel is then determined to be in the excess material class” [0111]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 8 rejection]), and
the one activation mode among the plurality of activation modes comprises at least one of an activation mode selected by a user, an activation mode selected by default, and an activation mode used in a previous scan operation (“given a probability map that includes probabilities of pixels belonging to various dental classes, where at least one of those dental classes is an excess material dental class, the system can filter out pixels that correspond to excess material.” [0110]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 8 rejection]).
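For illustration, the plurality of activation modes and the recited selection options (user-selected, default, or carried over from a previous scan operation) might be sketched as follows; the mode names and the selection precedence are hypothetical and are not taught by the reference.

```python
from enum import Enum

class ActivationMode(Enum):
    # Hypothetical modes, each keyed to a dental class to be filtered
    # (cf. the multi-class probability map at [0080]).
    FILTER_EXCESS_MATERIAL = "excess_material"
    FILTER_GINGIVA = "gingiva"

def select_activation_mode(user_choice=None, previous_mode=None,
                           default=ActivationMode.FILTER_EXCESS_MATERIAL):
    """Pick one activation mode: the user's selection if any, else the
    mode used in a previous scan operation, else a default mode."""
    return user_choice or previous_mode or default
```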
Regarding claim 20, Minchenkov teaches the data processing apparatus of claim 19, Minchenkov further teaching wherein the at least one processor is further configured to execute the at least one instruction to:
identify pixels of a class to be filtered in the one activation mode among pixels included in a frame received from a scanner (“the trained machine learning model outputs a probability map comprising, for each pixel in the intraoral image, a first probability that the pixel belongs to a first dental class and a second probability that the pixel belongs to a second dental class,” [clm 19]; “When a scan session is complete (e.g., all images for an intraoral site or dental site have been captured), intraoral scan application 115 may generate a virtual 3D model of one or more scanned dental sites.” [0045]; “In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes.” [0105]; “processing logic determines whether the probability of a pixel being excess material exceeds a threshold probability. If the probability of the excess material class (e.g., represented as a blue component) has a value that is greater than some probability threshold, then the pixel is classified as excess material.” [0112]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 9 rejection]); and
obtain filtered 3D scan data on the object by removing pixels of the identified class (“Data for pixels labeled as excess material may then be removed from or hidden in the intraoral image. This may include actually removing the pixels labeled as excess material from the intraoral image, applying a filter to the intraoral image, or modifying the pixels of the intraoral image labeled as excess material to a value that indicates that there is no surface at the pixel […] The images that are used to generate the virtual 3D model may be modified images in which excess material has been removed” [0107]; [0040-0052, 0083-0094], [fig. 1-3C, 5A-6], [see claim 9 rejection]).
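Finally, a sketch of the two steps recited in claim 20, using the "first technique" quoted from [0111] (a pixel belongs to the class whose probability is largest) followed by removal of the identified pixels; the argmax formulation and the NaN encoding are hypothetical simplifications.

```python
import numpy as np

def filter_frame_by_class(depth_frame, prob_map, class_to_filter):
    """Identify pixels of the class to be filtered in the active mode
    ([0111]: assign each pixel the class with the largest probability)
    and remove them from the frame used to build the 3D scan data."""
    pixel_classes = np.argmax(prob_map, axis=-1)
    filtered = depth_frame.astype(float)
    filtered[pixel_classes == class_to_filter] = np.nan  # hypothetical removal
    return filtered
```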
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Souza et al. (US20130089246A1, 2013-04-11) teaches a method for noise suppression in a 3-D volume image, executed at least in part on a logic processor, that obtains the 3-D volume image, applies diffusion to the volume image according to a parameter that relates to image scale and is specified in an operator instruction, and displays the volume image modified according to the applied diffusion [abst].
Shi et al. (US20130169639A1, 2013-07-04) teaches three-dimensional (3-D) medical image processing, and more particularly methods that facilitate segmentation of anatomical structures using interactive contouring with GPU (graphics processing unit) processing [0001].
Souza et al. (US20140314291A1, 2014-10-23) teaches a method for analyzing a subject tooth [abst]. The invention relates generally to dental imaging, and in particular to a radiographic imaging apparatus for viewing volume images having highlighted fracture information for teeth [0002].
Michaeli et al. (US20150029309A1, 2015-01-29) teaches obtaining images in a dental environment, and, more particularly, a method, system, apparatus, and computer program for 3D acquisition and caries detection [0002].
Sandholm et al. (US20150366525A1, 2015-12-24) teaches x-ray imaging systems and methods which use a priori information to address artifacts in 3D imaging [0001].
Sabina et al. (US20190231492A1, 2019-08-01) teaches methods and apparatuses for taking, using and displaying three-dimensional (3D) volumetric models of a patient's dental arch [abst].
Schnabel et al. (US20210085238A1, 2021-03-25) teaches a method, a system and computer readable storage media for detecting errors in three-dimensional (3D) measurements [0001].
Shen et al. (US20130308846A1, 2013-11-21) teaches image processing in x-ray computed tomography and, in particular, automatic tooth segmentation, teeth alignment detection, and manipulation in a digital CBCT volume [abst].
Elbaz et al. (US20180028065A1, 2018-02-01) teaches intraoral scanning methods and apparatuses for generating a three-dimensional model of a subject's intraoral region (e.g., teeth) including both surface features and internal features. These methods and apparatuses may be used for identifying and evaluating lesions, caries and cracks in the teeth [abst].
Sharma et al. (US20190117078A1, 2019-04-25) teaches optical systems and methods to detect caries, cracks, tooth defects and oral pathologies [0002].
Inglese et al. (US20200129069A1, 2020-04-30) teaches optical apparatus and methods which can provide reduced errors in generating a dental 3D surface mesh [abst].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III whose telephone number is (571)272-7296. The examiner can normally be reached M-F, 8AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES FRANKLIN MCDONALD III
Examiner
Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797