Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite steps of comparing and estimating, which fall under the mental processes grouping of abstract ideas because the steps correspond to concepts capable of being performed in the human mind (e.g., evaluation, judgment, and opinion).
This judicial exception is not integrated into a practical application because the additional limitations of the claims, e.g., the steps of obtaining and storing images, correspond to insignificant extra-solution activity appended to the judicial exception; these limitations are pre-solution activities because they merely gather data for use in the claimed process/system. Moreover, other additional limitations, such as those reciting the computer components, also do not integrate the judicial exception into a practical application because they amount to no more than mere instructions to apply the judicial exception using a generic computer component.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional steps, e.g., the steps of obtaining and storing images, are considered well-understood, routine, and conventional activity. For example, U.S. Patent Application Publication Nos. 2023/0281790 and 2023/0138821 teach obtaining and storing images for inspection.
For the aforementioned reasons, claims 1 and 10 are not patent eligible.
Claims 2 and 11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite steps of computing, which fall under the mathematical concepts grouping of abstract ideas because the steps correspond to mathematical calculations/relationships. This judicial exception is not integrated into a practical application because the claims recite no additional limitations.
For the aforementioned reasons, claims 2 and 11 are not patent eligible.
Claims 3-9 and 12-18 appear patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication No. 2024/0029238 to Daiku et al. (hereinafter Daiku).
For claim 1, Daiku as applied teaches an image detection device, comprising:
an image capturing module configured to obtain an image of object to be inspected from an object to be inspected (see, e.g., pars. 39-40, 43, 46 and 61 and FIGS. 1 and 6, which teach obtaining an inspection target image via an image reader);
a storage medium configured to store a standard image file array (see, e.g., pars. 39-40, 46 and 60 and FIGS. 1, 2 and 6, which teach obtaining a reference image stored in a memory), wherein the standard image file array comprises a plurality of standard object images (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach that the reference image includes a plurality of divided regions); and
a processor connected to the image capturing module and the storage medium (see, e.g., pars. 39-40 and 44-47 and FIGS. 1-2) and configured to:
obtain a plurality of sample object images from the image of object to be inspected (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach dividing the inspection target image into a plurality of regions);
store the plurality of sample object images in a sample image array (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach that the inspection target image includes a plurality of divided regions), wherein the sample image array and the standard image file array comprise same array index (see, e.g., pars. 54 and 81-86 and FIG. 5A, which teach that the divided regions of the inspection target image and the reference image correspond to one another);
respectively compare the plurality of sample object images of the sample image array and the plurality of standard object images of the standard image file array based on the array index to compute an inference score of each of the plurality of sample object images (see, e.g., pars. 47-54 and 81-89, and FIGS. 5B, 6 and 10, which teach determining whether contents of the reference image and the inspection target image coincide by sequentially comparing corresponding regions of the images and determining differences therebetween; the examiner interprets the differences between the corresponding regions as the claimed inference score); and
estimate a quality of the object to be inspected according to the inference score of the plurality of sample object images (see, e.g., pars. 64, 90-91, and FIGS. 6 and 12, which teach estimating the inspection target’s quality by performing the defect detection processing according to the differences between the regions).
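By way of illustration only, the following is a minimal sketch of the region-wise comparison the examiner reads onto the claims: both images are divided into grids sharing the same array index, per-region differences stand in for the claimed inference score, and quality is estimated from those scores. All names, grid sizes, and thresholds are hypothetical and are not drawn from Daiku or the claims.

```python
import numpy as np

def divide_into_regions(image: np.ndarray, rows: int, cols: int) -> list[np.ndarray]:
    """Split an image into a row-major list of equally sized regions."""
    h, w = image.shape[:2]
    rh, rw = h // rows, w // cols
    return [image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]

def inspect(target: np.ndarray, reference: np.ndarray,
            rows: int = 4, cols: int = 4, threshold: float = 10.0) -> bool:
    """Compare corresponding regions by shared index and estimate quality."""
    samples = divide_into_regions(target, rows, cols)       # sample image array
    standards = divide_into_regions(reference, rows, cols)  # standard image file array
    # The shared array index pairs each sample region with its standard region.
    scores = [float(np.mean(np.abs(s.astype(np.float32) - d.astype(np.float32))))
              for s, d in zip(samples, standards)]          # per-region inference scores
    # Quality estimate: the object passes only if every region's score is small.
    return all(score < threshold for score in scores)
```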
For claim 10, Daiku as applied teaches an image detection method for an object to be inspected, comprising:
obtaining an image of object to be inspected from an object to be inspected (see, e.g., pars. 39-40, 43, 46, and 61 and FIGS. 1 and 6, which teach obtaining an inspection target image via an image reader);
obtaining a plurality of sample object images from the image of object to be inspected (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach dividing the inspection target image into a plurality of regions);
storing the plurality of sample object images to a sample image array (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach that the inspection target image includes a plurality of divided regions), wherein the sample image array and a standard image file array comprise same array index (see, e.g., pars. 54 and 81-86 and FIG. 5A, which teach that the divided regions of the inspection target image and the reference image correspond to one another) and the standard image file array comprises a plurality of standard object images (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10, which teach that the reference image includes a plurality of divided regions);
respectively comparing the plurality of sample object images of the sample image array with the plurality of standard object images of the standard image file array to compute an inference score of each of the plurality of sample object images (see, e.g., pars. 47-54 and 81-89 and FIGS. 5B, 6 and 10, which teach determining whether contents of the reference image and the inspection target image coincide by sequentially comparing corresponding regions of the images and determining differences therebetween; the examiner interprets the differences between the corresponding regions as the claimed inference score); and
estimating a quality of the object to be inspected according to the inference score of the plurality of sample object images (see, e.g., pars. 64 and 90-91 and FIGS. 6 and 12, which teach estimating the inspection target’s quality by performing the defect detection processing according to the differences between the regions).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Daiku in view of U.S. Patent Application Publication No. 2019/0122913 to Lauber et al. (hereinafter Lauber).
For claims 2 and 11, Daiku as applied teaches that, before obtaining the plurality of sample object images of the image of object to be inspected, the processor is configured to: compute a rotation bias of the image of object to be inspected (see, e.g., pars. 76-78 and FIGS. 6 and 10, which teach performing the alignment processing by linear/affine transformation, which includes a rotation, before dividing the inspection target image into regions).
Daiku does not explicitly teach that the storage medium is configured to store a standard tilt angle, and that the processor is configured to: compute a tilt angle of the image of object to be inspected; and compute a rotation bias of the image of object to be inspected according to a difference value between the tilt angle and the standard tilt angle.
Lauber in the analogous art teaches performing skew comparison between the test and reference images by determining a reference scene function from the reference image and a test scene function from the test image and comparing the functions to determine the skew angle representing the rotation offset (see, e.g., pars. 20-23, 36-39, 42-45, and 95-99 of Lauber). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Daiku to align images based on the rotation bias as taught by Lauber because doing so would allow the alignment method to be a two-step process, resulting in a reduction in computation requirements and a faster inspection with improved time-to-results (see par. 41 of Lauber).
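For illustration only, a minimal sketch of the combined teaching as mapped above: a standard tilt angle is stored, the tilt angle of the inspection image is measured, and the rotation bias is their difference. The moment-based angle estimate below is a generic stand-in, not Lauber's scene-function skew comparison, and all names and values are hypothetical.

```python
import math
import cv2
import numpy as np

STANDARD_TILT_ANGLE = 0.0  # standard tilt angle held in the storage medium (hypothetical value)

def tilt_angle(gray: np.ndarray) -> float:
    """Estimate the image's tilt, in degrees, from second-order image moments
    (a stand-in for Lauber's scene-function comparison)."""
    m = cv2.moments(gray)
    return 0.5 * math.degrees(math.atan2(2.0 * m["mu11"], m["mu20"] - m["mu02"]))

def rotation_bias(target_gray: np.ndarray) -> float:
    """Rotation bias = measured tilt angle minus the stored standard tilt angle."""
    return tilt_angle(target_gray) - STANDARD_TILT_ANGLE
```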
For claims 3 and 12, Daiku in view of Lauber teaches that before storing the plurality of sample object images to the sample image array (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10 of Daiku, which teach that the inspection target image is divided into a plurality of regions after the linear transformation), the processor is configured to:
respectively calibrate a bias of the plurality of sample object images according to the rotation bias (see, e.g., pars. 63 and 75-78 and FIGS. 6 and 10 of Daiku, which teach aligning the inspection target image and the reference image using linear transformation); and
store the plurality of sample object images calibrated to the sample image array (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10 of Daiku, which teach that the inspection target image is divided into a plurality of regions).
For claims 4 and 13, Daiku in view of Lauber teaches that after computing the rotation bias of the image of object to be inspected (see, e.g., pars. 63 and 75-78 and FIGS. 6 and 10 of Daiku; the examiner finds that, to perform the linear transformation, the bias must be known before the transformation), the processor is configured to:
calibrate a bias of the image of object to be inspected according to the rotation bias (see, e.g., pars. 63 and 75-78 and FIGS. 6 and 10 of Daiku, which teach aligning the inspection target image and the reference image using linear transformation); and
obtain the plurality of sample object images in the image of object to be inspected calibrated (see, e.g., pars. 54 and 79-80 and FIGS. 5A, 6 and 10 of Daiku, which teach that the inspection target image is divided into a plurality of regions).
For claims 5 and 14, Daiku in view of Lauber teaches that the operation of the processor to compute the inference score of each of the plurality of sample object images comprises:
respectively comparing the plurality of sample object images calibrated of the sample image array with the plurality of standard object images of the standard image file array according to the array index to compute the inference score of each of the plurality of sample object images (see, e.g., pars. 47-54 and 81-89 and FIGS. 5B, 6 and 10 of Daiku, which teach determining whether contents of the reference image and the inspection target image coincide by sequentially comparing corresponding regions of the images and determining differences therebetween; the examiner notes that the images are divided after the alignment such that the regions are already calibrated).
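A minimal sketch of the order of operations relied on for claims 3-5 and 12-14 follows: the inspection image is rotated by the computed rotation bias before it is divided, so every region stored in the sample image array is already calibrated when it is compared against its standard counterpart. The OpenCV rotation is an assumed implementation, not Daiku's linear transformation, and the commented flow reuses the hypothetical helpers sketched earlier.

```python
import cv2
import numpy as np

def calibrate(image: np.ndarray, rotation_bias_deg: float) -> np.ndarray:
    """Rotate the inspection image about its center by the rotation bias."""
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), rotation_bias_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))

# Calibration precedes division (hypothetical flow, reusing the earlier sketches):
#   calibrated = calibrate(target, rotation_bias(target_gray))
#   samples = divide_into_regions(calibrated, rows=4, cols=4)
```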
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Daiku in view of Lauber and further in view of U.S. Patent Application Publication No. 2020/0134845 to Wang et al. (hereinafter Wang) and U.S. Patent Application Publication No. 2021/0341922 to Das et al. (hereinafter Das).
For claims 6 and 15, while Daiku in view of Lauber does not explicitly teach, Wang in the analogous art teaches:
performing a binary thresholding computation to the image of object to be inspected to obtain a first binary image (see, e.g., par. 93 of Wang, which teaches binarizing images based on thresholds);
detecting a plurality of first object contours of the first binary image and respectively computing a plurality of first center coordinates of the plurality of first object contours (see, e.g., par. 93 of Wang, which teaches detecting contours and calculating coordinates of centers).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Daiku in view of Wang to detect centers as taught by Wang because doing so would allow extracting corresponding features for transformation (see par. 91-92 of Wang).
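For illustration only, a minimal sketch of the step mapped to Wang: the inspection image is binarized by thresholding, object contours are detected in the binary image, and a center coordinate is computed for each contour. The threshold value and all names are placeholders, not Wang's parameters.

```python
import cv2
import numpy as np

def object_centers(gray: np.ndarray, thresh: int = 128) -> list[tuple[float, float]]:
    """Binarize, detect object contours, and compute each contour's center."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)  # first binary image
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:                      # first object contours
        m = cv2.moments(contour)
        if m["m00"] > 0:                          # skip degenerate contours
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # center coordinates
    return centers
```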
While Daiku in view of Lauber and Wang does not explicitly teach, Das in the analogous art teaches: computing the tilt angle of the image of object to be inspected with respect to a horizontal basis according to at least two coordinates of the plurality of first center coordinates and an origin coordinate (see, e.g., pars. 46-49 and FIG. 5 of Das, which teach determining the tilt angle of the object image as a rotation angle about the x-axis, wherein the image origin (0, 0) and the centroid are shown in FIG. 5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Daiku in view of Lauber and Wang to compute the tilt angle as taught by Das because doing so would allow determining the rotation of the object image relative to a reference axis (see pars. 46-49 of Das).
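Similarly, a minimal sketch of the Das-mapped computation: the tilt angle with respect to a horizontal basis is taken from the line through two of the detected centers, in an image frame whose origin coordinate is the (0, 0) corner. This reflects the examiner's geometric reading, not Das's actual implementation.

```python
import math

def tilt_from_centers(c1: tuple[float, float], c2: tuple[float, float]) -> float:
    """Tilt angle, in degrees, of the line through two object centers,
    measured against the horizontal (x) axis of an image whose origin
    coordinate is (0, 0)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    return math.degrees(math.atan2(dy, dx))
```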
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Daiku in view of Lauber and further in view of Wang and U.S. Patent Application Publication No. 2024/0362792 to Bilgen et al. (hereinafter Bilgen).
For claims 8 and 17, Daiku as applied teaches processing the standard image before the storage medium stores the standard image file array (see, e.g., pars. 46 and 60 and FIG. 6 of Daiku, which teach obtaining a reference image stored in a memory; since the reference image is processed after it is divided into the regions, the examiner finds that the reference image is stored after the division, which suggests generating the image portions before saving).
While Daiku as applied does not explicitly teach, Wang in the analogous art teaches:
perform a binary thresholding computation to a standard image to obtain a second binary image, wherein the standard image comprises a standard tilt angle (see, e.g., par. 93 of Wang, which teaches binarizing images based on thresholds);
detect a plurality of second object contours of the second binary image (see, e.g., par. 93 of Wang, which teaches detecting contours).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Daiku in view of Wang to detect contours as taught by Wang because doing so would allow extracting corresponding features for transformation (see par. 91-92 of Wang).
While Daiku in view of Lauber and Wang does not explicitly teach, Bilgen in the analogous art teaches: respectively compute a plurality of second minimum bounding rectangles according to the plurality of second object contours (see, e.g., pars. 9, 11 and 48-49 of Bilgen, which teach finding a minimum bounding rectangle surrounding each contour); and respectively use the plurality of second minimum bounding rectangles to obtain the plurality of standard object images of the standard image (see, e.g., pars. 9, 11 and 48-49 of Bilgen, which teach cropping the image according to the minimum bounding rectangles).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Daiku in view of Bilgen to obtain a plurality of standard object images as taught by Bilgen because doing so would allow segmentation of the images, allowing individual/separate processing of the cropped images (see par. 49 of Bilgen).
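For illustration only, a minimal sketch of the Bilgen-mapped step: a minimum bounding rectangle is computed for each detected contour and used to crop the corresponding standard object image out of the standard image. The axis-aligned cv2.boundingRect is an illustrative choice; a rotated minimum-area rectangle would also fit the claim language.

```python
import cv2
import numpy as np

def crop_standard_objects(image: np.ndarray, contours) -> list[np.ndarray]:
    """Compute a minimum bounding rectangle per contour and crop each one."""
    crops = []
    for contour in contours:                      # second object contours
        x, y, w, h = cv2.boundingRect(contour)    # second minimum bounding rectangle
        crops.append(image[y:y + h, x:x + w])     # one standard object image
    return crops
```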
Allowable Subject Matter
Claims 7, 9, 16, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regard to claims 7 and 16, when considering each claim as a whole, the prior art of record fails to disclose or render obvious, alone or in combination:
“wherein the operation is configured to obtain the plurality of sample object images of the image of object to be inspected by using a plurality of first minimum bounding rectangles, and
the operation of obtaining the plurality of sample object images of the image of object to be inspected comprises:
respectively computing the plurality of first minimum bounding rectangles according to the plurality of first object contours; and
obtaining the plurality of sample object images of the image of object to be inspected by using the plurality of first minimum bounding rectangles.”
In regard to claims 9 and 18, when considering each claim as a whole, the prior art of record fails to disclose or render obvious, alone or in combination:
“wherein after detecting the plurality of second object contours of the second binary image, the processor is configured to:
respectively compute a plurality of second center coordinates of the plurality of second object contours; and
compute the standard tilt angle of the standard image with respect to a horizontal basis according to at least two of the plurality of second center coordinates and an origin coordinate.”
Additional Citations
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Table 1. Additional citations and their relevance:

Nakao et al. (US 2017/0148153): Describes a positioning method for capturing an image of a product to position the product, a visual inspection apparatus for inspecting the appearance of the product, a visual inspection method, a program, and a computer-readable recording medium. In one embodiment, a standard image of a product serving as a standard for an inspection target is displayed, and a first region is set so as to surround a standard pattern in the standard image. A second region for characterizing a position and a posture of the standard pattern is also set in the standard image. In a first search step, a feature extracted from the first region set in the standard image is searched for in an inspection target image to roughly obtain the position and the posture of the standard pattern in the inspection target image. In a second search step, the feature extracted from the second region set in the standard image is searched for in the inspection target image to minutely obtain at least one of the position and the posture of the standard pattern in the inspection target image.

Liu (US 2022/0020138): Discloses a product inspection method and device, a production system, and a computer storage medium. The method comprises: conducting image acquisition on a product assembly line to obtain a production line image; extracting a product image including a product to be inspected from the production line image; extracting an inspection point image in a part inspection area of the product image; inputting the inspection point image into an inspection model to obtain an inspection result; and determining that the product to be inspected in the product image has defects under the condition that the inspection result meets any of the specified conditions.

Kondo et al. (US 2019/0139210): Discloses a defect classification apparatus for classifying images of defects of a sample included in images obtained by capturing the sample, the apparatus including an image storage unit for storing the images of the sample acquired by an external image acquisition unit, a defect class storage unit for storing types of defects included in the images of the sample, an image processing unit for extracting images of defects from the images of the sample, processing the extracted images of defects, and generating a plurality of defect images, a classifier learning unit for learning a defect classifier using the images of defects of the sample extracted by the image processing unit and data of the plurality of generated defect images, and a defect classification unit for processing the images of the sample using the classifier learned by the classifier learning unit to classify the images of defects of the sample.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and form 892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM whose telephone number is (571) 272-6560. The examiner can normally be reached Mon - Fri, 9:30 am - 6:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WOO C RHIM/Examiner, Art Unit 2676