DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/28/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Schmidt-Krulig (U.S. Patent Pub. No. 2024/0144600) in view of Kopelman (U.S. Patent Pub. No. 2018/0168781).
Regarding Claim 1, Schmidt-Krulig teaches an intraoral scanning system, comprising (Fig. 3):
an intraoral scanner (Fig. 2) to generate a plurality of images of a dental site (¶67 The intraoral scanner includes a handpiece in which multiple cameras and an illuminating light source are disposed. The cameras can include, e.g., a camera configured to acquire images in which ultraviolet light is projected and red, green, and blue monochrome cameras (configured to capture red, green, and blue monochrome images);) and
a computing device, connected to the intraoral scanner by a wired or wireless connection, wherein the computing device is to perform the following during intraoral scanning (¶112 FIG. 12 is a block diagram of an exemplary processing system, which can be configured to perform operations disclosed herein. Referring to FIG. 12, a processing system 1200 can include one or more processors 1202, memory 1204, one or more input/output devices 1206, one or more sensors 1208, one or more user interfaces 1210, and one or more actuators 1212. Processing system 1200 can be representative of each computing system disclosed herein:)
receive the plurality of images (¶67 The intraoral scanner includes a handpiece in which multiple cameras and an illuminating light source are disposed. The cameras can include, e.g., a camera configured to acquire images in which ultraviolet light is projected and red, green, and blue monochrome cameras (configured to capture red, green, and blue monochrome images);)
identify a subset of images from the plurality of images that satisfy one or more selection criteria; select the subset of images that satisfy the one or more selection criteria; and discard or ignore a remainder of images of the plurality of images that are not included in the subset of images (Fig. 6; ¶74 At 603 through 607, a series of evaluations are performed in order to determine whether the color image captured at 602 will be elected as a candidate image for use in the computation of a color information value. The scanner continually acquires images and frames at a constant frame rate irrespective of whether scanner motion occurs between consecutive image/frame captures. However, newly acquired color images can be discarded—or previously acquired color images can be deleted, as appropriate—in order to limit the total amount of data that is written to memory (thereby decreasing the size of the scan file) while ensuring that high quality data is acquired, stored, and subsequently utilized in the construction of a point cloud, 3D mesh, and texture atlas; ¶77-78 describe this is done by looking for an improvement (criteria) in the images)
Schmidt-Krulig teaches selecting individual images but does not explicitly disclose identify a subset of images from the plurality of images that satisfy one or more selection criteria; select the subset of images that satisfy the one or more selection criteria; and discard or ignore a remainder of images of the plurality of images that are not included in the subset of images.
Kopelman is in the same field of endeavor, image analysis. Further, Kopelman teaches identify a subset of images from the plurality of images that satisfy one or more selection criteria; select the subset of images that satisfy the one or more selection criteria; and discard or ignore a remainder of images of the plurality of images that are not included in the subset of images (¶263 At block 2630, processing logic may determine a subset of images from the stream of images that satisfy image selection criteria. At block 2638, processing logic selects the determined subset. The image analysis profiles or models may be used to process each incoming image, and then from the images received so far select an image that is a best match for a particular type of image; ¶263 that previously selected left or right profile view image may be discarded and replaced by the new left or right profile view. The process may continue until no new images are received.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Schmidt-Krulig by creating a subset of images based on one or more selection criteria as taught by Kopelman; one of ordinary skill in the art would have been motivated to combine the references to improve the efficiency of interfacing with patients, the accuracy of dental procedures, and the identification of dental conditions (Kopelman ¶47).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding Claim 2, Schmidt-Krulig in view of Kopelman discloses the intraoral scanning system of claim 1, wherein the computing device is further to perform at least one of: a) store the selected subset of images without storing the remainder of images from the plurality of images; or b) perform further processing of the subset of images without performing further processing of the remainder of images (Kopelman, ¶51 The image capture device may generate a stream of images, and processing logic may analyze the stream of images to select a subset of those images. The selected subset of images may then be saved and used to generate a model.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Schmidt-Krulig by creating a subset of images based on one or more selection criteria as taught by Kopelman; one of ordinary skill in the art would have been motivated to combine the references to improve the efficiency of interfacing with patients, the accuracy of dental procedures, and the identification of dental conditions (Kopelman ¶47).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding Claim 3, Schmidt-Krulig in view of Kopelman discloses the intraoral scanning system of claim 1, wherein the plurality of images comprise at least one of: a) a plurality of color two-dimensional (2D) images; or b) a plurality of near-infrared (NIR) two-dimensional (2D) images (Schmidt-Krulig, ¶43 The 2D images can include both depth images (e.g. images that record data used to determine the 3D structure of the object that is being scanned/imaged) and color images (e.g. images that record data pertaining to the color of the object that is being scanned/imaged).)
Regarding Claim 4, Schmidt-Krulig in view of Kopelman discloses the intraoral scanning system of claim 1, wherein the computing device is further to:
receive one or more additional images of the dental site generated by the intraoral scanner during the intraoral scanning (Schmidt-Krulig, Fig. 6; ¶74 The scanner continually acquires images and frames at a constant frame rate irrespective of whether scanner motion occurs between consecutive image/frame captures;)
determine that the one or more additional images satisfy the one or more selection criteria and cause an image of the subset of images to no longer satisfy the one or more selection criteria (Schmidt-Krulig, ¶77 the process evaluates, at 605, whether the new color image represents an improvement over the previously saved color image in the same neighborhood;)
select the one or more additional images that satisfy the one or more selection criteria; remove the image that no longer satisfies the one or more selection criteria from the subset of images; and discard or ignore the image that no longer satisfies the one or more selection criteria (Schmidt-Krulig, ¶78 If the process determines, at 605, that the new color image represents an improvement, the previously saved color image in the same neighborhood is deleted at 606. Thereafter, the process stores, at 604, the new color image (captured at 602) as a candidate color image, i.e. as an image that has been elected for consideration when coloring or texturing a 3D model. Alternatively, if the process determines that the new color image would not be an improvement, the process proceeds to 607 where it is determined whether the scan is complete; Kopelman ¶263 also teaches this.)
Regarding Claim 18, Schmidt-Krulig in view of Kopelman discloses the intraoral scanning system of claim 1, wherein the processing device is further to: receive a plurality of intraoral scans of the dental site generated by the intraoral scanner (Schmidt-Krulig, Fig. 5, 501 3D scan & image capture;) generate a three-dimensional (3D) polygonal model of the dental site using the plurality of intraoral scans (Schmidt-Krulig, Fig. 5, 520 Textured 3D model;) and perform texture mapping of the 3D polygonal model based on information from the subset of images without using information from the remainder of the plurality of images (Schmidt-Krulig, ¶78 If the process determines, at 605, that the new color image represents an improvement, the previously saved color image in the same neighborhood is deleted at 606. Thereafter, the process stores, at 604, the new color image (captured at 602) as a candidate color image, i.e. as an image that has been elected for consideration when coloring or texturing a 3D model.)
Regarding Claim 19, Schmidt-Krulig teaches a non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations (¶65 a non-transitory computer readable medium is provided having processor-executable instructions stored thereon. The processor-executable instructions are configured to cause a processor to carry out a method) during an intraoral scanning session, comprising (¶67 FIG. 2 illustrates an intraoral scanner designed to acquire scan data for constructing a 3D virtual model of dentition and oral tissues:)
receiving a plurality of images of a dental site generated by an intraoral scanner (¶67 The intraoral scanner includes a handpiece in which multiple cameras and an illuminating light source are disposed. The cameras can include, e.g., a camera configured to acquire images in which ultraviolet light is projected and red, green, and blue monochrome cameras (configured to capture red, green, and blue monochrome images);)
receiving a plurality of intraoral scans of the dental site generated by the intraoral scanner (¶73 The image set includes at least a depth image and a color image. The depth image provides, for each pixel in a 2D image, a depth value while the color image provides, for each pixel in a 2D image, an RGB value;)
generating a three-dimensional (3D) polygonal model of the dental site using the plurality of intraoral scans (¶68 FIG. 5 is a flow diagram illustrating a process for constructing a textured 3D virtual model of one or more object(s) or area of interest (hereinafter, “object”), such as the dentition and oral structures of a dental patient;)
identifying a subset of images from the plurality of images that satisfy one or more selection criteria; selecting the subset of images that satisfy the one or more selection criteria; discarding or ignoring a remainder of images of the plurality of images that are not included in the subset of images; and (Fig. 6; ¶74 At 603 through 607, a series of evaluations are performed in order to determine whether the color image captured at 602 will be elected as a candidate image for use in the computation of a color information value. The scanner continually acquires images and frames at a constant frame rate irrespective of whether scanner motion occurs between consecutive image/frame captures. However, newly acquired color images can be discarded—or previously acquired color images can be deleted, as appropriate—in order to limit the total amount of data that is written to memory (thereby decreasing the size of the scan file) while ensuring that high quality data is acquired, stored, and subsequently utilized in the construction of a point cloud, 3D mesh, and texture atlas; ¶77-78 describe this is done by looking for an improvement (criteria) in the images)
performing texture mapping of the 3D polygonal model of the dental site based on information from the subset of images without using information from the remainder of the plurality of images (¶39 Texture mapping, i.e., the approach for applying texture to the 3D model consists of mapping an image or collection of images, i.e. a texture image, onto the 3D model. To map pixels of the texture image onto the 3D model, a pixel of the texture image (i.e. a “texel”) is identified for each visible pixel of the 3D model and then mapped to that point. This computation is made when rendering the 3D scene on the screen and is facilitated by a precomputed mapping that, for each point in the 3D model (e.g., each vertex in a 3D mesh model) with 3D coordinates (x,y,z), gives the 2D coordinates (u,v) of its matching pixel in the texture image. This mapping between a 2D image and a 3D image is illustrated in FIG. 1.)
Schmidt-Krulig teaches selecting individual images but does not explicitly disclose identifying a subset of images from the plurality of images that satisfy one or more selection criteria; selecting the subset of images that satisfy the one or more selection criteria; and discarding or ignoring a remainder of images of the plurality of images that are not included in the subset of images.
Kopelman is in the same field of endeavor, image analysis. Further, Kopelman teaches identifying a subset of images from the plurality of images that satisfy one or more selection criteria; selecting the subset of images that satisfy the one or more selection criteria; and discarding or ignoring a remainder of images of the plurality of images that are not included in the subset of images (¶263 At block 2630, processing logic may determine a subset of images from the stream of images that satisfy image selection criteria. At block 2638, processing logic selects the determined subset. The image analysis profiles or models may be used to process each incoming image, and then from the images received so far select an image that is a best match for a particular type of image; ¶263 that previously selected left or right profile view image may be discarded and replaced by the new left or right profile view. The process may continue until no new images are received.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Schmidt-Krulig by creating a subset of images based on one or more selection criteria as taught by Kopelman; one of ordinary skill in the art would have been motivated to combine the references to improve the efficiency of interfacing with patients, the accuracy of dental procedures, and the identification of dental conditions (Kopelman ¶47).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Allowable Subject Matter
Claims 5-17 and 20-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claims 5 and 20, no prior art of record teaches or suggests: identify, for each image of the plurality of images, one or more faces of the 3D polygonal model associated with the image; for each face of the 3D polygonal model, identify one or more images of the plurality of images that are associated with the face and that satisfy the one or more selection criteria; and add the one or more images to the subset of images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN BILODEAU whose telephone number is (571)272-1032. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUSTIN BILODEAU/Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664