DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed January 12, 2026 has been entered. Claims 1-14 remain pending in the application. Applicant’s amendments to the Drawings have overcome the objection previously set forth in the Non-Final Office Action mailed September 11, 2025.
Response to Arguments
Applicant’s arguments, see pages 8-10 of the Remarks, filed January 12, 2026, with respect to the rejection(s) of claims 1-14 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Ciriello et al. (US 20240261068 A1).
Applicant's arguments filed January 12, 2026, regarding claim 7, have been fully considered but they are not persuasive. The applicant argues that Kopelman (US 6845175 B2) fails to teach anything regarding the shape or characteristics of treated teeth. However, Kopelman teaches performing alignment by using features in the treated teeth (Col. 4 lines 35-44 – “teeth may be displaced on the virtual three-dimensional image of teeth model in a manner they are expected to be shifted during the course of the orthodontic treatment. Thus, for example, by marking various landmarks on a displaced teeth and marking and then displacing the same landmarks in the cephalometric model, it may be possible to check on both images whether the orthodontic treatment achieves a result which matches a certain acceptable norm or how changes should be made to achieve such a norm”; Note: the displaced teeth are the treated teeth, and the cephalometric image is aligned to the displacement in the 3D image by using teeth landmarks, which are features).
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the “four corner points of the front teeth when viewed as a rectangle” in claim 3 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 3 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In claim 3, “the processor is further configured to align the VR image and the color image based on an area of front teeth and four corner points of the front teeth when viewed as a rectangle” is considered new matter because it introduces limitations not supported by the disclosure. The examiner acknowledges the description in paragraph 0054 of the specification: “the alignment function 140 can also enlarge and reduce, based on the area of the front teeth of each of the VR image and the color image, four end points in a case where the image is regarded as a rectangle”. However, claim 3 and paragraph 0054 express different scopes; claim 3 specifies “four corner points of front teeth when viewed as a rectangle”, while paragraph 0054 specifies “four end points in a case where the image is regarded as a rectangle”. Corner points of front teeth are not necessarily the same as end points of an image, and “viewed as” can hold a different meaning from “regarded as”. Therefore, claim 3 introduces new matter.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In claim 3, it is unclear what is meant by “four corner points of the front teeth when viewed as a rectangle”. For example, it is unclear whether the four corner points refer specifically to corner points on the front teeth themselves or to corner points of the front teeth images. It is also unclear what is entailed by the points being “viewed as a rectangle”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 9, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ciriello et al. (US 20240261068 A1), hereinafter Ciriello.
Regarding claim 1, Ciriello teaches a medical information processing device comprising: a memory and a processor (Paragraph 0024 – “a computing device comprising a processor and a non-transitory computer readable storage medium with a computer program including instructions executable by the processor causing the processor to: align an OCT image captured by the first image sensor with a color image captured by the second image sensor; and superimpose the color image onto a surface boundary extracted from the OCT image.”; Note: the computing device is equivalent to the medical information processing device, and the non-transitory computer readable storage medium is equivalent to the memory) configured to:
generate a volume rendering (VR) image from a tomographic medical image of a subject captured by a tomographic device (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image);
acquire a color image acquired by one or a plurality of optical imaging units (Paragraph 0092, 0094, 0099 – “the device uses a second scanner adjunct to an OCT scanner with a separate optical path (such as the embodiment in which the two scanners are spatially adjacent to each other but have separate optical paths)…In some embodiments, a first scanner comprises an OCT scanner and a second scanner comprises a color camera…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera is equivalent to the optical imaging unit), the color image sharing at least a part of an imaging region with the VR image (Paragraph 0094-0095 – “a known relative position and orientation between the color camera and OCT scanner are used to map the color image onto a 3D surface imaged by OCT… the color images mapped onto the surface of the OCT data is utilized to highlight tooth defects, such as cavities, chips, and tooth decay”; Note: the color image and the OCT data cover the same imaging region since they are mapped to highlight the same landmarks);
and superimpose at least a part of the color image on the VR image (Paragraph 0099 – “the captured color image is mapped onto a 3D surface imaged by OCT”),
wherein each of the imaging units is an imaging device that captures a 2D image in a visible light band (Paragraph 0094, 0099 – “the mapping of color image onto the 3D surface of OCT data is done by registration of the color image with a 2D projection image generated from 3D OCT data…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera captures images in a visible light band. It is obvious that the color camera captures 2D images, because the images are registered to a 2D projection of the OCT data. If the images were not 2D, there would be no need to create a 2D projection for registration).
Ciriello does not directly teach the limitation: “generate a volume rendering (VR) image from an X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device”. However, Ciriello separately teaches the X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device (Paragraph 0093, 0121 – “at least one scanner comprises…an intraoral x-ray CT… the scanning method employs X-ray computed tomography where the illumination is handled using a single or array of source and detection is handled using a single or an array of detectors surrounding the imaged object, where the source(s) and detector(s) are both located intraorally. In some embodiments, the sensor may be held stationary, and emitter may be moved relative to the target anatomy (for example the tooth)”; Note: the x-ray CT captures an image of an object in the mouth, such as a tooth), and generating a volume rendering from a tomographic medical image (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the X-ray CT of Ciriello could have been substituted for the OCT of Ciriello because both the OCT and X-ray CT serve the purpose of producing medical tomographic images of a subject and both can be used to generate a volume rendering. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution.
Finally, the substitution achieves the predictable result of forming a volume rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the X-ray CT for the OCT according to known methods to yield the predictable result of representing a subject in a volume rendering.
Regarding claim 9, Ciriello teaches the medical information processing device according to claim 1. Ciriello further teaches wherein the processor is further configured to set a transparency of the color image to be superimposed on the VR image (Paragraph 0099, 0122 – “a color camera is used to capture a color image of the outer surface of the dental tissue. In some embodiments, the captured color image is mapped onto a 3D surface imaged by OCT…a 3D model of a tooth is displayed in a shaded view, a cross sectional view, a partially transparent view, a shadow view; or any combination thereof…color tooth surface data can be toggled on or off or made to have varying opacity by the user”; Note: the color tooth surface data comes from the color image, and its transparency can be set by the user. It is implied that the processor must be configured to change the transparency in response to the user settings in order for the changes to appear).
Regarding claim 13, Ciriello teaches a medical information processing method (Paragraph 0012 – “method of overlaying color onto an image captured by an optical coherence tomography (OCT) system”) comprising:
generating, by a processor (Paragraph 0024 – “a processor”), a volume rendering (VR) image from a tomographic medical image of a subject captured by a tomographic device (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image. It is implied that a processor performs the generation, because the generation cannot occur without a processor);
acquiring, by a processor (Paragraph 0024 – “causing the processor to: align an OCT image captured by the first image sensor with a color image captured by the second image sensor; and superimpose the color image onto a surface boundary extracted from the OCT image”; Note: it is implied that the processor has to acquire the color image before performing any processes on it), a color image acquired by one or a plurality of optical imaging units (Paragraph 0092, 0094, 0099 – “the device uses a second scanner adjunct to an OCT scanner with a separate optical path (such as the embodiment in which the two scanners are spatially adjacent to each other but have separate optical paths)…In some embodiments, a first scanner comprises an OCT scanner and a second scanner comprises a color camera…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera is equivalent to the optical imaging unit), the color image sharing at least a part of an imaging region with the VR image (Paragraph 0094-0095 – “a known relative position and orientation between the color camera and OCT scanner are used to map the color image onto a 3D surface imaged by OCT… the color images mapped onto the surface of the OCT data is utilized to highlight tooth defects, such as cavities, chips, and tooth decay”; Note: the color image and the OCT data cover the same imaging region since they are mapped to highlight the same landmarks);
and superimposing, by a processor (Paragraph 0024 – “causing the processor to: align an OCT image captured by the first image sensor with a color image captured by the second image sensor; and superimpose the color image onto a surface boundary extracted from the OCT image”), at least a part of the color image on the VR image (Paragraph 0099 – “the captured color image is mapped onto a 3D surface imaged by OCT”),
wherein each of the imaging units is an imaging device that captures a 2D image in a visible light band (Paragraph 0094, 0099 – “the mapping of color image onto the 3D surface of OCT data is done by registration of the color image with a 2D projection image generated from 3D OCT data…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera captures images in a visible light band. It is obvious that the color camera captures 2D images, because the images are registered to a 2D projection of the OCT data. If the images were not 2D, there would be no need to create a 2D projection for registration).
Ciriello does not directly teach the limitation: “generating, by a processor, a VR image from an X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device”. However, Ciriello separately teaches the X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device (Paragraph 0093, 0121 – “at least one scanner comprises…an intraoral x-ray CT… the scanning method employs X-ray computed tomography where the illumination is handled using a single or array of source and detection is handled using a single or an array of detectors surrounding the imaged object, where the source(s) and detector(s) are both located intraorally. In some embodiments, the sensor may be held stationary, and emitter may be moved relative to the target anatomy (for example the tooth)”; Note: the x-ray CT captures an image of an object in the mouth, such as a tooth), and generating a volume rendering from a tomographic medical image (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the X-ray CT of Ciriello could have been substituted for the OCT of Ciriello because both the OCT and X-ray CT serve the purpose of producing medical tomographic images of a subject and both can be used to generate a volume rendering. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution.
Finally, the substitution achieves the predictable result of forming a volume rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the X-ray CT for the OCT according to known methods to yield the predictable result of representing a subject in a volume rendering.
Regarding claim 14, Ciriello teaches a non-transitory computer-readable medium having a program stored therein and configured to cause a processor to execute (Paragraph 0024 – “a computing device comprising a processor and a non-transitory computer readable storage medium with a computer program including instructions executable by the processor causing the processor to: align an OCT image captured by the first image sensor with a color image captured by the second image sensor; and superimpose the color image onto a surface boundary extracted from the OCT image”):
generating a VR image from a tomographic medical image of a subject captured by a tomographic device (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image);
acquiring a color image acquired by one or a plurality of optical imaging units (Paragraph 0092, 0094, 0099 – “the device uses a second scanner adjunct to an OCT scanner with a separate optical path (such as the embodiment in which the two scanners are spatially adjacent to each other but have separate optical paths)…In some embodiments, a first scanner comprises an OCT scanner and a second scanner comprises a color camera…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera is equivalent to the optical imaging unit), the color image sharing at least a part of an imaging region with the VR image (Paragraph 0094-0095 – “a known relative position and orientation between the color camera and OCT scanner are used to map the color image onto a 3D surface imaged by OCT… the color images mapped onto the surface of the OCT data is utilized to highlight tooth defects, such as cavities, chips, and tooth decay”; Note: the color image and the OCT data cover the same imaging region since they are mapped to highlight the same landmarks);
and superimposing at least a part of the color image on the VR image (Paragraph 0099 – “the captured color image is mapped onto a 3D surface imaged by OCT”),
wherein each of the imaging units is an imaging device that captures a 2D image in a visible light band (Paragraph 0094, 0099 – “the mapping of color image onto the 3D surface of OCT data is done by registration of the color image with a 2D projection image generated from 3D OCT data…a broadband visible-range LED illuminates the dental tissue and a color camera is used to capture a color image of the outer surface of the dental tissue”; Note: the color camera captures images in a visible light band. It is obvious that the color camera captures 2D images, because the images are registered to a 2D projection of the OCT data. If the images were not 2D, there would be no need to create a 2D projection for registration).
Ciriello does not directly teach the limitation: “generating a VR image from an X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device”. However, Ciriello separately teaches the X-ray computed tomographic medical image of a subject captured by an X-ray computed tomographic device (Paragraph 0093, 0121 – “at least one scanner comprises…an intraoral x-ray CT… the scanning method employs X-ray computed tomography where the illumination is handled using a single or array of source and detection is handled using a single or an array of detectors surrounding the imaged object, where the source(s) and detector(s) are both located intraorally. In some embodiments, the sensor may be held stationary, and emitter may be moved relative to the target anatomy (for example the tooth)”; Note: the x-ray CT captures an image of an object in the mouth, such as a tooth), and generating a volume rendering from a tomographic medical image (Paragraph 0075 – “a plurality of OCT scans, taken from different positions and orientations are transformed into the coordinate system of the mechanical positioning system's body and are thereby used to generate a single volumetric 3D model of an extended volume of dental tissue (such as a dental arch)”; Note: a single volumetric 3d model, which is equivalent to the volume rendering, is generated of a person’s dental tissue, which is the subject. The 3D model is generated from an OCT scan medical image). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the X-ray CT of Ciriello could have been substituted for the OCT of Ciriello because both the OCT and X-ray CT serve the purpose of producing medical tomographic images of a subject and both can be used to generate a volume rendering. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution.
Finally, the substitution achieves the predictable result of forming a volume rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the X-ray CT for the OCT according to known methods to yield the predictable result of representing a subject in a volume rendering.
Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Kopelman et al. (US 6845175 B2), hereinafter Kopelman.
Regarding claim 2, Ciriello teaches the medical information processing device according to claim 1. Ciriello does not teach wherein the processor is further configured to extract a common feature from each of the VR image and the color image, and to align the VR image and the color image based on the feature. However, Kopelman teaches extracting a common feature from each of the VR image and the color image (Col. 7 lines 56-58, Col. 8 lines 1-3 – “basic landmarks are marked on discernable objects in the three-dimensional virtual teeth model as represented in image 111…a cephalometric image of the same patient is input and on this image, the same key points are then marked”; Note: the landmarks are a common feature. Additionally, the VR image and color image were previously taught from the rejection of claim 1), and aligning the VR image and the color image based on the feature (Col. 8 lines 3-7 – “the two images may be matched, which may be by way of super-position as shown above, which can be represented on a screen, or by any other way of mapping of each location in one image to that of the other image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Kopelman to extract a common feature and align images based on that feature for the benefit of having images that are accurately positioned. If the features were not common to both images, they would not be lined up properly, making the final image difficult to view.
Regarding claim 7, Ciriello teaches the medical information processing device according to claim 2. Ciriello does not teach wherein the processor is further configured to perform alignment by using shapes of treated teeth or features in the treated teeth. However, Kopelman teaches performing alignment by using features in the treated teeth (Col. 4 lines 35-44 – “teeth may be displaced on the virtual three-dimensional image of teeth model in a manner they are expected to be shifted during the course of the orthodontic treatment. Thus, for example, by marking various landmarks on a displaced teeth and marking and then displacing the same landmarks in the cephalometric model, it may be possible to check on both images whether the orthodontic treatment achieves a result which matches a certain acceptable norm or how changes should be made to achieve such a norm”; Note: the displaced teeth are the treated teeth, and the cephalometric image is aligned to the displacement in the 3D image by using teeth landmarks, which are features). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Kopelman to perform alignment by using features in treated teeth for the benefit of using the image to “determine whether shifts in various elements such as the jaw, are within permitted physiological or aesthetical limits” (Kopelman: Col. 4 lines 62-64) or “whether desired proportional measurements have been reached in such teeth displacement or whether any medication should be made” (Kopelman: Col. 8 lines 16-18).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Kopelman and Sachdeva et al. (US 20040029068 A1), hereinafter Sachdeva.
Regarding claim 4, Ciriello in view of Kopelman teaches the medical information processing device according to claim 2. Ciriello further teaches wherein the medical image is a computed tomography (CT) image in an oral cavity (Paragraph 0093, 0121 – “at least one scanner comprises…an intraoral x-ray CT… the scanning method employs X-ray computed tomography where the illumination is handled using a single or array of source and detection is handled using a single or an array of detectors surrounding the imaged object, where the source(s) and detector(s) are both located intraorally. In some embodiments, the sensor may be held stationary, and emitter may be moved relative to the target anatomy (for example the tooth)”; Note: the x-ray CT captures an image of an object in the mouth, such as a tooth). Ciriello does not teach wherein the processor is further configured to perform alignment in consideration of a height component of the CT image and an image on which three-dimensional mapping has been executed. However, Sachdeva teaches performing alignment in consideration of a height component of the CT image (Paragraph 0103 – “the radiographic image can be a computed tomographic image volume. As previously mentioned, the orthodontic data contains three-dimensional images of the surface of the orthodontic structure”) and an image on which three-dimensional mapping has been executed (Paragraph 0100, 0105 – “the scaling factor 212 determination is based on an assumption that the scan data will have a linear error term in each of the x, y and z axis, such that a single scaling factor is determined and used to scale each of the teeth as well as the other aspects of the orthodontic structure of the patient…To more accurately map the two-dimensional images of a tooth onto the three-dimensional model, multiple angles of the tooth should be used. Accordingly, a side, a front, and a bottom view of the tooth should be taken and mapped to the scaled digital model of the tooth. 
Note that the bone and other portions of the orthodontic structure are scaled in a similar manner. Further note that MRI images, and any other images obtained of the orthodontic patient, may also be scaled in a similar manner”; Note: the images/models are scaled, which takes into consideration the x, y, and z axes of the data. The y axis represents height. Additionally, 3D mapping is performed to map the 2D image onto the 3D model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Sachdeva to perform alignment while considering height of the CT image and 3D mapped image for the benefit of ensuring that the images are of the same or similar size, which would make the aligned result more accurate and visually appealing. Additionally, “When digital image data from multiple sources are combined or superimposed relative to each other to create a composite model, it may be necessary to scale data from one set to the other in order to create a single composite model in a single coordinate system in which the anatomical data from both sets have the same dimensions in three-dimensional space. Hence, some scaling may be required” (Sachdeva: Paragraph 0099).
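For illustration only, the single linear scaling factor Sachdeva describes, applied uniformly to the x, y (height), and z axes of the scan data, may be sketched as follows; the code, values, and function names are hypothetical and not taken from any cited reference:

```python
import numpy as np

# Hypothetical sketch of a single linear scaling factor (per Sachdeva's
# description): a landmark distance measured in the scan is compared
# against the known physical distance, and the resulting factor is
# applied uniformly to the x, y (height), and z coordinates of every point.
def scale_points(points, scan_dist, measured_dist):
    factor = measured_dist / scan_dist  # one factor for all three axes
    return np.asarray(points, dtype=float) * factor

# Example: the scan reports a 10 mm landmark distance known to be 12 mm.
pts = [[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]]
scaled = scale_points(pts, scan_dist=10.0, measured_dist=12.0)
# scaled[1] is [12.0, 6.0, 2.4] -- the height component (y) is scaled too
```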
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Kopelman and National Institute of Biomedical Imaging and Bioengineering (Computed Tomography (CT)), hereinafter NIBIB.
Regarding claim 5, Ciriello in view of Kopelman teaches the medical information processing device according to claim 2. Ciriello further teaches wherein the medical image is a computed tomography (CT) image in an oral cavity (Paragraph 0093, 0121 – “at least one scanner comprises…an intraoral x-ray CT… the scanning method employs X-ray computed tomography where the illumination is handled using a single or array of source and detection is handled using a single or an array of detectors surrounding the imaged object, where the source(s) and detector(s) are both located intraorally. In some embodiments, the sensor may be held stationary, and emitter may be moved relative to the target anatomy (for example the tooth)”; Note: the x-ray CT captures an image of an object in the mouth, such as a tooth). Ciriello does not teach wherein the processor is further configured to reconstruct, in three dimensions, an image obtained by performing alignment using an image obtained by converting the CT image into two dimensions. However, NIBIB teaches reconstructing, in three dimensions, an image obtained by performing alignment using an image obtained by converting the CT image into two dimensions (Paragraph 3 on Page 2 and Paragraph 1 on Page 3 – “Each time the x-ray source completes one full rotation, the CT computer uses sophisticated mathematical techniques to construct a two-dimensional image slice of the patient… When a full slice is completed, the image is stored and the motorized bed is moved forward incrementally into the gantry. The x-ray scanning process is then repeated to produce another image slice. This process continues until the desired number of slices is collected. Image slices can either be displayed individually or stacked together by the computer to generate a 3D image of the patient”; Note: a 3D image is reconstructed by aligning 2D image slices, which were converted from the original CT x-ray source). 
NIBIB is cited in support of Ciriello to show how the CT process works. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of NIBIB to reconstruct a 3D image by aligning 2D images obtained from CT imaging because this process is standard for CT scans, where a 3D image is reconstructed from slices for medical experts to view.
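For illustration only, the slice-stacking process NIBIB describes (2D image slices collected sequentially and stacked to form a 3D image) may be sketched as follows; the code and values are hypothetical and not taken from any cited reference:

```python
import numpy as np

# Hypothetical sketch of the CT workflow NIBIB describes: each source
# rotation yields a 2D slice, and the stored slices are stacked in
# acquisition order to reconstruct a 3D volume.
def reconstruct_volume(slices):
    return np.stack(slices, axis=0)  # axis 0 = bed-travel direction

# Three fake 4x4 slices, each filled with its slice index.
slices = [np.full((4, 4), i, dtype=float) for i in range(3)]
volume = reconstruct_volume(slices)
# volume.shape is (3, 4, 4): 3 slices, each 4x4
```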
Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Kopelman and Cofar et al. (US 20230048898 A1), hereinafter Cofar.
Regarding claim 6, Ciriello in view of Kopelman teaches the medical information processing device according to claim 2. Ciriello does not teach wherein the processor is further configured to estimate shapes of teeth exposed from gums and to perform alignment. However, Cofar teaches estimating shapes of teeth exposed from gums (Paragraph 0236, 0242 – “determining 1003 a limited set of parameters, for example less than 20 or less than 15 parameters, indicative of: a size of the tooth; and a shape of said tooth…The at least one parameter for describing a shape of the tooth may comprise exactly two parameters (e.g. a3 and a4, or a3 and a5, see FIGS. 5A through 5F) for describing a first transition line, and exactly two parameters (e.g. a6 and a7, or a6 and a8) for describing a second transition line”; Note: Figs. 5A-5F show the shapes of teeth exposed from gums being estimated; see modified screenshot of Figs. 5A-5F below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Cofar to estimate shapes of teeth exposed from gums because representing the shape of the teeth in this way is a “highly compact manner” of representing and identifying each tooth (Cofar: Paragraph 0239).
Modified screenshot of Fig. 5A-5F (taken from Cofar)
Ciriello modified by Cofar still does not teach performing alignment. However, Kopelman teaches performing alignment (Col. 8 lines 3-7 – “the two images may be matched, which may be by way of super-position as shown above, which can be represented on a screen, or by any other way of mapping of each location in one image to that of the other image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Kopelman to perform alignment because “For the purpose of proper design of orthodontic treatment it would have been high advantageous to have a method and system whereby information which can be acquired from one type of image can be transferred or superpositioned to information available from another type of image”. Alignment of different images “allows better appreciation of the three-dimensional structure of the teeth and the relative position of different teeth” (Kopelman: Col. 1 lines 47-56).
Regarding claim 8, Ciriello in view of Kopelman teaches the medical information processing device according to claim 2. Ciriello does not teach wherein the processor is further configured to perform alignment without using a feature of a change in shapes of teeth, the shapes of the teeth having been changed by treatment. However, Kopelman teaches performing alignment without using a feature of a change in shapes of teeth (Col. 3 lines 45-53 – “The basic landmarks which are used for registering the two sets of images, are typically defined points at either the base or the apex of certain selected teeth e.g. the incisors and the first molars. Such basic landmarks may be selected by the user or may be automatically selected by the system's processor, e.g. based on established norms. After selecting the basic landmarks and marking them in one of the images, then the landmarks may be marked in the other images to allow to register both images”; Note: alignment is performed using basic landmarks, which is not a feature of a change in the shape of the teeth). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Kopelman to perform alignment without using a feature of a change in the shape of the teeth because the alignment would not be affected even after the shape of the teeth changes. Additionally, using landmarks instead of shape is a consistent and easy way to identify positions of the teeth for alignment. Furthermore, Ciriello modified by Kopelman still does not teach the shapes of the teeth having been changed by treatment. However, Cofar teaches the shapes of the teeth having been changed by treatment (Paragraph 0223-0224 – “the computer program may also show a photo-realistic image of the patient with the newly envisioned teeth. 
In this way, the patient gets an impression of what he or she will look like after dental treatment… reference is made to the example of FIGS. 15A and 15B, where the patient can clearly see the current clinical situation before dental treatment (in FIG. 15A) and the future look after dental treatment (in FIG. 15B)”; Note: the shape of the teeth is changed after treatment; see screenshot of Fig. 15A and 15B below).
Screenshot of Fig. 15A and 15B (taken from Cofar)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Cofar wherein the shape of the teeth has changed after treatment because many common dental procedures, including contouring and the installation of crowns, cause the shape of the teeth to change.
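For illustration only, registration based on landmarks marked in both images, as Kopelman describes, may be sketched as follows; this is a simplified hypothetical scheme (translation only) and not Kopelman's actual method, which may also account for rotation and scale:

```python
import numpy as np

# Hypothetical sketch of landmark-based registration: the same landmarks
# are marked in both images, and one image is translated so that its
# landmark centroid coincides with the other's. Note the alignment uses
# only landmark positions, not any feature of tooth-shape change.
def register_by_landmarks(landmarks_a, landmarks_b):
    a = np.asarray(landmarks_a, dtype=float)
    b = np.asarray(landmarks_b, dtype=float)
    return a.mean(axis=0) - b.mean(axis=0)  # translation to apply to image B

# Two landmarks marked in each image (2D coordinates, hypothetical values).
shift = register_by_landmarks([[2.0, 3.0], [4.0, 5.0]],
                              [[0.0, 0.0], [2.0, 2.0]])
# shift is [2.0, 3.0]
```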
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of L&V Tutorials (How to Change the Color of an Overlay in Adobe Photoshop), hereinafter L&V.
Regarding claim 10, Ciriello teaches the medical information processing device according to claim 1. Ciriello does not teach wherein the processor is further configured to set a hue of the color image to be superimposed on the VR image. However, L&V teaches setting a hue of the color image to be superimposed on another image (Screenshots – The hue of a superimposed color image is set; see modified screenshots below).
Modified screenshots (taken from L&V)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of L&V to set the hue of the color image for the benefit of making the color image easier to see, appear more accurate to the real-life subject, or appear more visually appealing.
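For illustration only, setting the hue of an overlay color while preserving its saturation and brightness, as the L&V tutorial does in Photoshop, may be sketched with Python's standard colorsys module; this is a hypothetical sketch, not L&V's procedure:

```python
import colorsys

# Hypothetical sketch: change only the hue of an RGB color, keeping its
# saturation and value, as a hue adjustment on an overlay layer would.
def set_hue(rgb, hue):
    _, s, v = colorsys.rgb_to_hsv(*rgb)  # discard the old hue
    return colorsys.hsv_to_rgb(hue, s, v)

# Example: shift a pure red overlay (hue 0.0) to a cyan hue (0.5).
recolored = set_hue((1.0, 0.0, 0.0), 0.5)
# recolored is (0.0, 1.0, 1.0), i.e. cyan at full saturation/brightness
```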
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Curt et al. (Optimal digital color image correlation), hereinafter Curt.
Regarding claim 11, Ciriello teaches the medical information processing device according to claim 1. Ciriello does not teach wherein the processor is further configured to interpolate, when color information in at least a partial region of the color image to be overlaid on the VR image is not acquirable, the color information on the region from surrounding color information. However, Curt teaches interpolating, when color information in at least a partial region of the color image is not acquirable, the color information on the region from surrounding color information (Paragraph 2 in 2nd Col. of Page 1 – “Many standard color cameras are equipped with the so-called Color Filter Array (CFA) technology. It is assumed that the color fields are continuous and mostly smooth. Thus, the color components are not acquired at every pixel location; they are sampled on a regular array. At each pixel location, a single color component is stored, whereas other ones are calculated thanks to interpolation schemes from neighboring pixels”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Curt to use color interpolation for the benefit of having a smooth color image. Furthermore, it is common that cameras have CFA technology to capture color and interpolate color, as stated by Curt (Paragraph 2 in 2nd Col. of Page 1 – “Many standard color cameras are equipped with the so-called Color Filter Array (CFA) technology”).
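For illustration only, the neighbor-based interpolation Curt describes for CFA sensors (a missing color component at a pixel is calculated from neighboring samples) may be sketched as follows; this is a simplified hypothetical scheme, not Curt's actual algorithm:

```python
import numpy as np

# Hypothetical sketch of CFA-style interpolation: where a color component
# was not acquired at a pixel, estimate it as the mean of the available
# neighboring samples of that component.
def interpolate_missing(channel, mask):
    """channel: 2D array of samples; mask: True where a sample exists."""
    out = channel.astype(float).copy()
    h, w = channel.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                neigh = [channel[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x),
                                        (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]]
                out[y, x] = sum(neigh) / len(neigh) if neigh else 0.0
    return out

# Example: a green channel sampled on a checkerboard pattern.
green = np.array([[0.0, 8.0], [8.0, 0.0]])
mask = np.array([[False, True], [True, False]])
filled = interpolate_missing(green, mask)
# missing pixels are filled from their two acquired neighbors (both 8.0)
```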
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ciriello in view of Boudet (An Introduction to Dental Photography), hereinafter Boudet.
Regarding claim 12, Ciriello teaches the medical information processing device according to claim 1. Ciriello does not teach wherein the color image is an image in an oral cavity, the image being acquired by a five-image method. However, Boudet teaches wherein the color image is an image in an oral cavity (Fig. 6a, 6b – The figures show color images of an oral cavity; see screenshot of Fig. 6a and 6b below), the image being acquired by a five-image method (Paragraph 3 on Page 3 – “Five intraoral photos: Five retracted views, including an anterior view, a right view and a left view, and two mirror occlusal shots (one of the mandible and one of the maxilla)”).
Screenshot of Fig. 6a and 6b (taken from Boudet)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ciriello to incorporate the teachings of Boudet to acquire an oral cavity image by a five-image method because the five-image method captures images of the oral cavity from different views, which helps create an accurate representation of the oral cavity. The images could then be used by dentists to examine the health of the patient.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Laubersheimer et al. (US 20120205828 A1) teaches a method of generating a dental restoration part by superimposing color images to create a 3D image. Li et al. (The research of corner detector of teeth image based on the curvature scale space corner algorithm) teaches a method of detecting the corner points of teeth in images. Lam et al. (Mapping intraoral photographs on virtual teeth model) teaches a method of aligning images of teeth based on points on the teeth.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE HAU MA whose telephone number is (571)272-2187. The examiner can normally be reached M-Th 7-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE HAU MA/Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617