Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1, 3, and 5-8 are amended; claims 13-15 are newly added; claims 9-12 have been withdrawn; claims 1-15 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) filed 11/14/25 has been considered by the examiner.
Response to Arguments
Regarding the rejection of claims 1-2, 5, and 7, the arguments have been found persuasive (specifically, regarding issues with the combinability of the art: while Burgin does teach the extraction of engravings on pills, the generation of an image would not be present), and as such another non-final rejection has been supplied with new art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over IWAMI (US 20210019886 A1, hereinafter “IWAMI”) in view of TAKAMORI et al. (US 20210103766 A1, hereinafter “TAKAMORI”).
Regarding claim 1, IWAMI teaches a type discrimination device comprising:
an image generation unit that generates an extraction mark image in which a mark appearing on a captured image of a target drug whose type is unidentified is extracted, based on an output image obtained ([0099]: “Therefore, the preprocessing unit 24 performs preprocessing. For an image captured for each of light irradiation directions, the preprocessing uses an edge extraction filter in a direction corresponding to the irradiation direction, which is an edge extraction filter of a size corresponding to the number of pixels of an edge (marked groove) of the identification information appearing in each of the images, thereby generating an edge image for each of irradiation directions, and thereafter, generates a combined image in which a plurality of edge images are combined”. This combined image emphasizes the mark on the drug “Specifically, it is possible to reduce the information other than the engraving, such as the pattern and the scratches smaller than the groove of engraving indicating the identification information on the surface of the medicine, leading to extraction of the engraving” [0100]); and
a discrimination unit that discriminates the type of the target drug, based on a collation result between the extraction mark image generated by the image generation unit and a registration mark image registered in advance for each type of the drug (Fig. 7, [0116]: “More specifically, in the verification step S004, template matching with the master image is performed for each of the plurality of medicine extraction images X to evaluate the similarity (correlation value) with the master image”. This process discriminates the type of drug detected “By repeating the above procedure for the read master images (that is, for the number of medicine types indicated by the prescription condition information acquired by the prescription condition information acquisition unit 22), the type is specified for each of verification target medicines”[0117]).
IWAMI does not expressly disclose using a trained model to extract the mark information of the drug.
However, TAKAMORI teaches using a trained model to generate the extracted mark image by extracting the mark from an input image ([0076]: “In addition to or instead of the regional expansion described above, the preprocessing part 100I may apply the binarization processing, the image inversion processing, and the engraving extraction processing to the image for matching. The engraving extraction can be performed by multiplication with the mask image generated using a neural network for engraving extraction (second layered network)”. The captured image and the mask image are input into the second layered network to be multiplied to generate an image of the extracted engraving “Then, the captured image and the binarized mask image are multiplied and preprocessed (for example, engraving extraction, binarization, inversion, etc.) to generate an image for matching (a part (c) of the same figure)” (Fig. 10c, [0070]). This model is also trained: “The second layered network can be a neural network, such as a CNN (Convolutional Neural Network), which is configured by performing machine learning, such as deep learning, and providing the image from which the printing and/or engraving is extracted as teacher data” ([0127]). Deep learning presumes the model is trained. This is further supported by the idea that the output can be extracted as teacher data, and teacher data is used to train the models).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify IWAMI to include TAKAMORI is expressly provided by TAKAMORI, stating that “In the configuration of Supplementary Note 7, the preprocessing described above allows for even more accurate matching” ([0126]). This preprocessing includes the generation of an engraving extraction image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model with the motivation of improving the accuracy of matching engravings to known engravings for medicine. The person of ordinary skill in the art would have recognized the benefit of improved matching accuracy by use of preprocessing.
Regarding claim 2, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. In addition, IWAMI further teaches
wherein the mark is an engraved mark ([0100]: “Specifically, it is possible to reduce the information other than the engraving, such as the pattern and the scratches smaller than the groove of engraving indicating the identification information on the surface of the medicine, leading to extraction of the engraving”. The “or” limitation means only one of the two listed items need be met for a case of obviousness).
Regarding claim 5, the content of claim 5 is similar to the content of claim 1; therefore, it is rejected for the same reasons of obviousness as claim 1.
Regarding claim 7, the content of claim 7 is similar to the content of claim 1; therefore, it is rejected for the same reasons of obviousness as claim 1.
Regarding claim 14, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. In addition, IWAMI further teaches wherein the discrimination unit discriminates the type of the target drug without using the trained model (Fig. 7, [0116]: “More specifically, in the verification step S004, template matching with the master image is performed for each of the plurality of medicine extraction images X to evaluate the similarity (correlation value) with the master image”. This process discriminates the type of drug detected “By repeating the above procedure for the read master images (that is, for the number of medicine types indicated by the prescription condition information acquired by the prescription condition information acquisition unit 22), the type is specified for each of verification target medicines” [0117]. The trained model is not used here).
Regarding claim 15, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. IWAMI teaches wherein between the generation of the extracted mark image by the image generation unit and the discrimination of the type of the target drug by the discrimination unit, no processing using the trained model is performed (Fig. 7, [0115]: “After reading the master image from the database DB, the verification unit 25 verifies the type of the verification target medicine and the number of medicines for each of types (S004) using the read master image and the image of the verification target medicine captured by the image capturing unit 16 (more precisely, the medicine extraction image X)”. After the extraction image is obtained, the master image is obtained from a database, and template matching is performed between the extraction image and the master image. Between the obtaining of the extraction image and the discrimination step, no processing is performed by the trained model).
Claims 3, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over IWAMI (US 20210019886 A1, hereinafter “IWAMI”) in view of TAKAMORI et al. (US 20210103766 A1, hereinafter “TAKAMORI”), and further in view of Srinivasan et al. (US 20200320165 A1, hereinafter “Srinivasan”).
Regarding claim 3, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. In addition, TAKAMORI further teaches
wherein the trained model is a ([0127]: “The second layered network can be a neural network, such as a CNN (Convolutional Neural Network), which is configured by performing machine learning, such as deep learning, and providing the image from which the printing and/or engraving is extracted as teacher data”. Deep learning presumes the model is trained. This is further supported by the idea that the output can be extracted as teacher data, and teacher data is used to train the models. This model is trained to identify a pixel forming the mark on an image, which can be seen in Fig. 10c, where the extraction image contains the mark on the image).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify IWAMI to include TAKAMORI is expressly provided by TAKAMORI, stating that “In the configuration of Supplementary Note 7, the preprocessing described above allows for even more accurate matching” ([0126]). This preprocessing includes the generation of an engraving extraction image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model with the motivation of improved accuracy for matching engravings to known engravings for medicine. The person of ordinary skill in the art would have recognized the benefit of improved matching accuracy by use of preprocessing.
The combination of IWAMI and TAKAMORI does not expressly disclose using a semantic segmentation model to differentiate pixels of the mark and the background portion of the mark.
However, Srinivasan teaches using a semantic segmentation model to differentiate pixels of the mark and the background portion of the mark ([0045]: “The semantic segmentation subsystem 302 receives the reference single page graphic images 106 and outputs a pixel-wise segmented image of fixed size with labels indicating an image element class associated with each pixel of the single page graphic images 106. The image element classes include text, image, shape, background, or any other distinguishable image element classes of the image elements in the reference single page graphic image 106”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of IWAMI and TAKAMORI’s feature extraction model to include Srinivasan’s semantic segmentation model because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, Srinivasan’s semantic segmentation model permits accurate segmentation of markings on a surface to better distinguish the markings from the background. This known benefit in Srinivasan is applicable to the combination of IWAMI and TAKAMORI’s feature extraction model as they both share characteristics and capabilities; namely, they are directed to feature extraction of markings on a surface. Therefore, it would have been recognized that modifying the combination of IWAMI and TAKAMORI’s feature extraction model to include Srinivasan’s semantic segmentation model would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate Srinivasan’s semantic segmentation model in extraction of markings on a surface and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Regarding claim 6, the content of claim 6 is similar to the content of claim 3; therefore, it is rejected for the same reasons of obviousness as claim 3.
Regarding claim 8, the content of claim 8 is similar to the content of claim 3; therefore, it is rejected for the same reasons of obviousness as claim 3.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over IWAMI (US 20210019886 A1, hereinafter “IWAMI”) in view of TAKAMORI et al. (US 20210103766 A1, hereinafter “TAKAMORI”), and further in view of KONTSEVICH (US 20070071285 A1, hereinafter “KONTSEVICH”).
Regarding claim 4, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. In addition, IWAMI further teaches
wherein the discrimination unit discriminates the type of the target drug, based on a collation result between an image obtained (Fig. 7, [0116]: “More specifically, in the verification step S004, template matching with the master image is performed for each of the plurality of medicine extraction images X to evaluate the similarity (correlation value) with the master image”. This process discriminates the type of drug detected “By repeating the above procedure for the read master images (that is, for the number of medicine types indicated by the prescription condition information acquired by the prescription condition information acquisition unit 22), the type is specified for each of verification target medicines”[0117]).
The combination of IWAMI and TAKAMORI does not expressly disclose performing blurring processing on the extraction mark image.
However, KONTSEVICH teaches performing blurring processing on an image ([0007]: “The image contour map first gets blurred in accordance with the so-called chamfer metric, and then all templates get matched with every location in the analyzed image. The advantage of this technique is that matching does not need to be done for the whole area of a template; only the edge points in the template need to be considered. This reduction of the points from area to contours leads to substantial performance gains”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of IWAMI and TAKAMORI’s feature extraction model to include KONTSEVICH’s blurring preprocessing because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify the combination of IWAMI and TAKAMORI to include KONTSEVICH is expressly provided by KONTSEVICH, stating that “The advantage of this technique is that matching does not need to be done for the whole area of a template; only the edge points in the template need to be considered. This reduction of the points from area to contours leads to substantial performance gains” ([0007]). By blurring the image, the noise is reduced and prominent features remain, which allows for improved matching performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of IWAMI and TAKAMORI’s feature extraction model to include KONTSEVICH’s blurring preprocessing with the motivation of improving matching performance. The person of ordinary skill in the art would have recognized the benefit of improved matching performance.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over IWAMI (US 20210019886 A1, hereinafter “IWAMI”) in view of TAKAMORI et al. (US 20210103766 A1, hereinafter “TAKAMORI”), and further in view of Rashidi (US 20220130033 A1, hereinafter “Rashidi”).
Regarding claim 13, the combination of IWAMI and TAKAMORI teaches the type discrimination device according to claim 1. In addition, TAKAMORI further teaches wherein the trained model ([0076]: “In addition to or instead of the regional expansion described above, the preprocessing part 100I may apply the binarization processing, the image inversion processing, and the engraving extraction processing to the image for matching. The engraving extraction can be performed by multiplication with the mask image generated using a neural network for engraving extraction (second layered network)”. The captured image and the mask image are input into the second layered network to be multiplied to generate an image of the extracted engraving “Then, the captured image and the binarized mask image are multiplied and preprocessed (for example, engraving extraction, binarization, inversion, etc.) to generate an image for matching (a part (c) of the same figure)” (Fig. 10c, [0070]). This model is also trained: “The second layered network can be a neural network, such as a CNN (Convolutional Neural Network), which is configured by performing machine learning, such as deep learning, and providing the image from which the printing and/or engraving is extracted as teacher data” ([0127]). Deep learning presumes the model is trained. This is further supported by the idea that the output can be extracted as teacher data, and teacher data is used to train the models).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify IWAMI to include TAKAMORI is expressly provided by TAKAMORI, stating that “In the configuration of Supplementary Note 7, the preprocessing described above allows for even more accurate matching” ([0126]). This preprocessing includes the generation of an engraving extraction image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify IWAMI’s feature extraction to include TAKAMORI’s feature extraction method and trained model with the motivation of improved accuracy for matching engravings to known engravings for medicine. The person of ordinary skill in the art would have recognized the benefit of improved matching accuracy by use of preprocessing.
The combination of IWAMI and TAKAMORI does not expressly disclose the trained model containing an encoder-decoder structure that generates an output image of the same size as the input image.
However, Rashidi teaches a model containing an encoder-decoder structure that generates an output image of the same size as the input image (Fig. 2: “FIG. 2 shows an embodiment of a CNN architecture according to the present invention, the CNN including 5 convolutional encoder layers followed by 5 convolutional decoder layers”. The size of the image being input is the same as the size of the image being output).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of IWAMI and TAKAMORI’s extraction model to include Rashidi’s model with an encoder-decoder architecture because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Rashidi’s model with an encoder-decoder architecture is comparable to the combination of IWAMI and TAKAMORI’s extraction model because both are convolutional neural networks that extract features from images. The combination of IWAMI and TAKAMORI is silent about the architecture needed to perform feature extraction, disclosing only that a multilayer CNN is used. Rashidi provides the architecture for a CNN that extracts features. Therefore, it would have been obvious to one of ordinary skill in the art to use an encoder-decoder architecture in the CNN for extracting the features of images in the combination of IWAMI and TAKAMORI’s extraction model, as taught by Rashidi, to extract the features from images for further image processing.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
ITO et al. (CN 104321804 A) teaches matching patterns to master images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFANO A DARDANO whose telephone number is (703)756-4543. The examiner can normally be reached Monday - Friday 11:00 - 7:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEFANO ANTHONY DARDANO/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698