DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1 and 10-13 are amended. Claim 9 is canceled. Claims 1-8 and 10-13 are pending in this application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-8, and 10-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Favorskaya et al., "DIGITAL WATERMARKING OF 3D MEDICAL VISUAL OBJECTS".
Regarding claim 1, Favorskaya discloses an intraoral image processing method (Section 1. Introduction, first sentence; digital processing methods) performed by an intraoral image processing device comprising a processor (Abstract, first sentence; medical equipment (a processor is implied) provides 3D models of scanning organs), the method comprising:
obtaining a first intraoral image (Section 5. Embedding and Extraction Schemes, first paragraph; Fig. 3a; and Section 6. Experimental Results, first paragraph; sliced images related to a 3D model of a single tooth, 3D models of several teeth, or a 3D model of a whole jaw; original or host slice); and
obtaining a second intraoral image by embedding input additional information including at least one of a text and an image into at least a partial area of the first intraoral image such that the partial area is modified to reflect the additional information (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, first paragraph; Figs. 3b-3g; and Section 6. Experimental Results, first paragraph; regions for embedding and a corresponding type of transform for embedding are carefully selected; (b) textual watermark, (c) ROI watermark (enlarged three times), (d) fragile watermark (Figs. 3b-3d show the additional information); (e) watermarked slice after embedding of the ROI watermark, (f) watermarked slice after embedding of the ROI and textual watermarks, (g) watermarked slice after embedding of the ROI, textual, and fragile watermarks (Figs. 3e-3g show second intraoral images into which the additional information is embedded)),
wherein each of the first intraoral image and second intraoral image is three-dimensional scan data obtained by imaging a surface of an object (Abstract, first sentence; Section 5. Embedding and Extraction Schemes, first paragraph; Figs. 3a and 3e-3g; and Section 6. Experimental Results, first paragraph; medical equipment provides 3D models of scanning organs; sliced images related to a 3D model of a single tooth, 3D models of several teeth, or a 3D model of a whole jaw; the original or host slice and the embedded slices of the original slice are therefore 3D scan data).
Regarding claim 2, Favorskaya discloses the intraoral image processing method of claim 1, and further discloses identifying an additional information input area for displaying the additional information in the first intraoral image (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, second paragraph; and Section 6. Experimental Results, first paragraph).
Regarding claim 3, Favorskaya discloses the intraoral image processing method of claim 2, and further discloses wherein the identifying of the additional information input area comprises identifying the additional information input area in remaining areas other than an oral cavity area of the first intraoral image (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, second paragraph; and Section 6. Experimental Results, first paragraph).
Regarding claim 4, Favorskaya discloses the intraoral image processing method of claim 2, and further discloses
receiving a selection of at least one of a tooth and gingiva as a target (Section 5. Embedding and Extraction Schemes, first paragraph),
wherein the identifying of the additional information input area comprises identifying the additional information input area in a remaining area other than an area within a certain range from the at least one selected from the tooth and the gingiva (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, second paragraph; and Section 6. Experimental Results, first paragraph).
Regarding claim 6, Favorskaya discloses the intraoral image processing method of claim 2, and further discloses
outputting a user interface screen for selecting the additional information input area (Figs. 3b-3d and Section 6. Experimental Results, first paragraph),
wherein the identifying of the additional information input area comprises identifying a selected area as the additional information input area in response to the user interface screen (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, second paragraph; and Section 6. Experimental Results, first paragraph).
Regarding claim 7, Favorskaya discloses the intraoral image processing method of claim 2, and further discloses outputting at least one of a user interface screen on which the identified additional information input area is displayed and a user interface screen on which the input additional information is displayed in the identified additional information input area (Figs. 3b-3d and Section 6. Experimental Results, first paragraph).
Regarding claim 8, Favorskaya discloses the intraoral image processing method of claim 7, and further discloses wherein,
when the input additional information is greater than or equal to a certain size and when there are a plurality of additional information input areas where the additional information is displayed, the outputting of the user interface screen on which the input additional information is displayed comprises outputting an identifier indicating a position of the additional information input area instead of the additional information, or outputting the additional information input area whose size is reduced (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes; Figs. 3b-3d; and Section 6. Experimental Results, first paragraph).
Regarding claim 10, Favorskaya discloses the intraoral image processing method of claim 1, and further discloses wherein
the obtaining of the second intraoral image into which the additional information is embedded comprises obtaining the second intraoral image by embedding the additional information into the first intraoral image by replacing variables or color values of pixels of a two-dimensional image mapped to the first intraoral image with values corresponding to the additional information (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, first paragraph; Section 6. Experimental Results, first paragraph; and Section 7. Conclusions).
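For illustration only, the claimed replacement of pixel color values with values corresponding to the additional information can be sketched as least-significant-bit embedding. This is an assumed scheme chosen for concreteness; neither the claim nor Favorskaya specifies this exact implementation, and the function names below are hypothetical.

```python
# Sketch (assumed LSB scheme): embed the bits of a text into the
# least-significant bits of 8-bit pixel color values, so each pixel
# value changes by at most 1 and the embedded text remains recoverable.

def embed_text_lsb(pixels, text):
    """Replace the LSB of each 8-bit pixel value with one bit of `text`."""
    bits = [(byte >> i) & 1
            for byte in text.encode("utf-8")
            for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for the additional information")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the data bit
    return out

def extract_text_lsb(pixels, n_chars):
    """Read back `n_chars` bytes of embedded text from the pixel LSBs."""
    data = bytearray()
    for c in range(n_chars):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        data.append(byte)
    return data.decode("utf-8")
```

Because only the least-significant bit of each value is replaced, the visual change to the image is minimal while the additional information remains exactly extractable.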
Regarding claim 11, Favorskaya discloses the intraoral image processing method of claim 1, and further discloses wherein
the first intraoral image is the three-dimensional scan data expressed as at least one of dots and mesh (Section 1. Introduction; 3D models are 3D polygonal mesh
models), and
the obtaining of the second intraoral image into which the additional information is embedded comprises obtaining the second intraoral image by embedding the additional information in the first intraoral image by changing a color of at least one of a point, a vertex, and a polygon including the vertex of the first intraoral image located at a position corresponding to an outline of at least one of a text and an image included in the additional information (Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, first paragraph; Figs. 3b-3g; and Section 6. Experimental Results, first paragraph).
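For illustration only, the claimed recoloring of points or vertices located along the outline of a text or image can be sketched as follows. The data layout (per-vertex color tuples, a sampled 2D outline, and a matching tolerance) is an assumption for the sketch, not the applicant's or the reference's actual implementation.

```python
# Sketch (assumed layout): recolor mesh vertices whose (x, y) position
# falls near a sampled point on the outline of the additional information.

WATERMARK_COLOR = (0, 0, 0)  # assumed marking color for the outline

def embed_outline_in_mesh(vertices, colors, outline_xy, tol=0.5):
    """Return new per-vertex colors; vertices near an outline point are recolored.

    vertices   : list of (x, y, z) tuples for the 3D scan data
    colors     : list of (r, g, b) tuples, one per vertex
    outline_xy : iterable of (x, y) points sampled from the text/image outline
    tol        : matching tolerance in model units (assumed)
    """
    new_colors = list(colors)
    for i, (x, y, _z) in enumerate(vertices):
        for ox, oy in outline_xy:
            if abs(x - ox) <= tol and abs(y - oy) <= tol:
                new_colors[i] = WATERMARK_COLOR
                break
    return new_colors
```

The same idea extends from points and vertices to whole polygons by recoloring every polygon that contains a matched vertex.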
Regarding claim 12, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 13, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Favorskaya et al., "DIGITAL WATERMARKING OF 3D MEDICAL VISUAL OBJECTS" in view of Zarrabi et al., "BlessMark: A Blind Diagnostically-Lossless Watermarking Framework for Medical Applications Based on Deep Neural Networks".
Regarding claim 5, Favorskaya discloses the intraoral image processing method of claim 2, and further discloses wherein the identifying of the additional information input area comprises identifying the additional information input area from the first intraoral image by using "a digital watermarking scheme" to identify additional information input areas from a plurality of intraoral images (Abstract; Section 4. Selecting Regions for Embedding; Section 5. Embedding and Extraction Schemes, second paragraph; and Section 6. Experimental Results, first paragraph).
Favorskaya discloses claim 5 as enumerated above, but Favorskaya does not explicitly disclose using a trained neural network as claimed.
However, Zarrabi discloses that a deep neural network is used to recognize the ROI map in the embedding, extraction, and recovery processes of a watermarking framework (Abstract).
Therefore, taking the combined disclosures of Favorskaya and Zarrabi as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a deep neural network that recognizes the ROI map in the embedding, extraction, and recovery processes of the watermarking framework, as taught by Zarrabi, into the invention of Favorskaya, for the benefit of satisfying the confidentiality of the patient information through a blind watermarking system while preserving the diagnostic/medical information of the image throughout the watermarking process (Zarrabi: Abstract).
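For illustration only, the combination as applied can be sketched as follows: a trained network predicts a binary ROI mask, and embedding is restricted to pixels outside the ROI. The model interface below is hypothetical; Zarrabi's BlessMark framework is not reproduced here.

```python
# Sketch (hypothetical interface): a trained segmentation model marks
# diagnostically important pixels (ROI), and the watermark is embedded
# only in the remaining region-of-non-interest (RONI) pixels.

def select_embedding_pixels(image, roi_model):
    """Return indices of pixels outside the network-predicted ROI.

    image     : flat sequence of pixel values
    roi_model : assumed object with a predict(image) method returning a
                0/1 mask per pixel, where 1 marks ROI (do not embed there)
    """
    mask = roi_model.predict(image)
    return [i for i, m in enumerate(mask) if m == 0]
```

Restricting embedding to the RONI is what makes the scheme diagnostically lossless: the watermark never alters pixels the network identifies as medically relevant.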
Response to Arguments
Applicant's arguments with respect to claims 1-8 and 10-13 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571) 270-1937. The examiner can normally be reached 8 AM-6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN D HUYNH/Primary Examiner, Art Unit 2665