DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
2. This communication is in response to the application filed on 11/16/2023.
3. Claims 1-19 are pending.
4. Limitations appearing inside {} are intended to indicate the limitations not taught by the cited prior art reference(s) or combination(s).
Information Disclosure Statement
5. The information disclosure statement (IDS) submitted on 11/16/2023 has been considered by the examiner.
Claim Objections
6. Claims 1, 18, and 19 are objected to because of the following informalities:
Claim 1, ln. 10-11 recites “…reference image and the target image to be inspected before subjected to…”, consider correcting to “…reference image and the target image to be inspected that have not been subjected to…”.
Claims 18 and 19 recite limitations analogous to claim 1 in ln. 8-9 and ln. 9-10, respectively.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
7. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
8. Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
9. Claim 13 recites the limitation "…the inspection type…" in ln. 4. There is insufficient antecedent basis for this limitation in the claim. Specifically, there is no recitation of an “inspection type” in the claim upon which claim 13 depends. Changing the dependency of claim 13 to either claim 9 or claim 10 would alleviate this issue. For the sake of compact prosecution, the recitation of “inspection type” in claim 13 was treated as analogous to the “inspection type” defined in claim 9, as including “a dot shape, a linear shape, image unevenness, and a surface shape”.
Claim Rejections - 35 USC § 103
10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
12. Claims 1-2, 4-5, 7-8, 11-12, 14-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa (hereinafter Arakawa), and further in view of U.S. Patent No. 8,379,932 to Fukunishi (hereinafter Fukunishi).
13. Regarding Claim 1, Arakawa discloses an inspection apparatus for inspecting a target image to be inspected that is obtained by reading an image formed on a recording medium by a printing apparatus ([par. 0006, ln. 1-12] “…an inspection apparatus is configured to inspect a printed product by positioning a reading target image obtainable by reading the printed product relative to a reference image and collating the reading target image with the reference image. The inspection apparatus includes a positioning unit configured to perform positioning processing for the reference image and the reading target image with a first precision, and a detection unit configured to detect an image defect candidate area by collating the reading target image with the reference image, which have been positioned by the positioning unit.”), the inspection apparatus comprising one or more controllers including one or more processors and one or more memories, the one or more controllers configured to ([Fig. 7], [par. 0063, ln. 1-3] “FIG. 7 illustrates an example hardware configuration of the control unit 405 provided in the inspection apparatus 102.”, [par. 0064, ln. 1-8] “The control unit 405 includes a main control unit 703, which is constituted by a CPU, a ROM, and a RAM (although they are not illustrated in the drawing), which are cooperatively operable and function as a control unit. Further, the main control unit 703 can control various operations to be performed by the processing units provided in the control unit 405 to control the overall operation of the inspection apparatus 102.”):
execute image simplifying processing for converting a plurality of image elements in image to a lump of image elements in which the plurality of image elements are concatenated, while maintaining positions of the image elements as-is, on a reference image and the target image to be inspected ([par. 0079, ln. 1-11] “The correction processing includes color conversion processing, gamma correction processing, filter processing (halftone smoothing processing or edge deformation adjustment processing), and bit width adjustment processing. In a case where the correction processing is performed on only the reference image, the image quality difference adjustment unit 707 generates an image equivalent to the scanned image based on simulation using the reference image. This is equivalent to simulating the characteristics of the image forming apparatus 101 and the inspection sensor 403 when no image defect occurs.”, [par. 0080, ln. 1-8] “A resolution conversion unit 708 can convert the resolution of the scanned image or the reference image. The scanned image and the reference image may be mutually different in resolution when these images are input to the control unit 405. Further, the resolution of an image may be changed to control the precision in positioning processing (described below). In such cases… 708 performs resolution conversion processing.”, [par. 0081, ln. 1-18] “…it is now assumed that the scanned image is 600 dpi in the main scanning and 300 dpi in the sub scanning. On the other hand, the reference image is 1200 dpi in the main scanning and 1200 dpi in the sub scanning. 
If the inspection processing unit 713 requires the resolution of 300 dpi in each of the main scanning and the sub scanning, the resolution conversion unit 708 performs reduction/zoom processing on respective images to obtain images of 300 dpi in both the main scanning and the sub scanning… a conventional method is usable considering the amount of calculations and the required precision… when… 708 performs zooming using the SINC function, it is feasible to obtain a high-precision zooming result although a large amount of calculations is required. When… 708 performs zooming using the Nearest Neighbor Algorithm, it is feasible to reduce the amount of calculations although a low-precision zooming result is obtained.”);
perform alignment between the reference image and the target image to be inspected {before subjected to the image simplifying processing}, using moving related information regarding the alignment between the reference image and the target image to be inspected based on the reference image and the target image to be inspected that have been subjected to the image simplifying processing ([Fig. 10A-11D], [par. 0086, ln. 1-9] “The positioning unit 710 performs positioning processing for the scanned image and the reference image… 710 calculates the affine transformation parameters (i.e., positioning information) to be used when the image deforming unit 709 performs the image geometric deformation processing… the scanned image and the reference image have the same resolution when they are subjected to the positioning processing performed by… 710…”, [par. 0087, ln. 2-10] “…to reduce the amount of calculations… 710 performs positioning of the entire region of the image using image information and positional information of a partial image, e.g., a rectangular area (hereinafter, referred to as a patch or a patch image), not the entire region of the image. The positioning according to the present exemplary embodiment includes three steps of selection of each positioning patch, positioning for each patch, and calculation of affine transformation parameters…”, [par. 0089, ln. 1-10] “… 710 selects a plurality of patches that are suitable for the positioning processing from the reference image. A patch having a larger corner feature quantity in a patch image is an example of the patch suitable for the positioning processing. The corner feature is a feature of an intersection point of two edges where two different standout edges extending in different directions are present at a local region. The corner feature quantity is a feature quantity representing the strength of the edge feature.”, [par. 0093, ln. 
1-10] “…710 performs patch selection processing based on a selection parameter. The selection parameter is a parameter that can be used to control the size of each patch to be selected and the number (or the density) of patches. If the patch size becomes greater and the number of patches increases, the positioning accuracy can be improved although the amount of calculations increases.”, [par. 0103, ln. 1-13] “The positional correspondence relationship between the reference patches and the scan patches is described in detail below with reference to FIGS. 11C and 11D… a scan patch illustrated in FIG. 11D corresponds to a reference patch illustrated in FIG. 11C. There are two pieces of information obtainable from the positioning processing applied to the above-described two patches. The first information is center coordinate values (refpX_i, refpY_i) of an i-th (i=1 to N, and N represents the number of patches) reference patch. The second information is coordinate values (scanpX_i, scanpY_i) of a scan patch, which represent a corresponding position of an image represented by the center coordinates of the reference patch.”, [par. 0104, ln. 1-8] “Any shift amount estimation method that can obtain a positional correspondence relationship between the coordinate values (refpX_i, refpY_i) and (scanpX_i, scanpY_i) is employable as a positioning method… a patch pair of a reference patch and a scan patch can be subjected to the Fast Fourier Transform (FFT) to estimate a shift amount by acquiring a correlation between the above-described two patches in a frequency space.”, [par. 0105-0106, ln. 1-14] “The positioning unit 710 can calculate the affine transformation parameters in the following manner. The affine transformation method is a coordinate conversion method that can be expressed by the following conversion formula.
x′ = a·x + b·y + e
y′ = c·x + d·y + f
The above-described formula includes six affine transformation parameters a, b, c, d, e, and f. … (x, y) is (refpX_i, refpY_i) and (x', y') is (scanpX_i, scanpY_i). The positioning unit 710 calculates the affine transformation parameters using N pieces of conversion formulae that can be obtained from N pieces of patch pairs. For example, it is useful to use the least squares method to obtain the affine transformation parameters.”); and
perform inspection by comparing the aligned reference image with the aligned target image to be inspected ([par. 0108, ln. 1-4] “The collation unit 711 receives the scanned image and the reference image after the image deforming unit 709 has completed the image geometric deformation processing, and then performs collation for both images.”, [par. 0109, ln. 1-6] “…the collation unit 711 generates a difference image between the reference image and the scanned image… the difference image can be calculated according to the following formula. Difference image DIF(x,y) = DIS(reference image REF(x,y)-scanned image SCAN(x,y))”, [par. 0118, ln. 1-12] “The determination unit 712 performs determination processing to determine the presence of an image defect candidate with reference to the collation result received from the collation unit 711. More specifically… 712 evaluates the image feature quantity for each pixel block and determines whether the target pixel block is an image defect candidate… 712… uses a criterion illustrated in FIG. 12 in a case where each of the area size and the mean difference value is the image feature quantity, in determining whether the target pixel block is an image defect candidate.”).
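The least-squares calculation of the six affine parameters from N patch-pair correspondences, as described by Arakawa at par. 0105-0106, can be sketched as follows. The function name and the synthetic translation example are illustrative assumptions, not material from the reference.

```python
import numpy as np

def fit_affine(ref_pts: np.ndarray, scan_pts: np.ndarray) -> np.ndarray:
    """Least-squares fit of affine parameters (a, b, e, c, d, f) mapping
    reference patch centers (refpX_i, refpY_i) to corresponding scan patch
    positions (scanpX_i, scanpY_i): x' = a*x + b*y + e, y' = c*x + d*y + f."""
    n = ref_pts.shape[0]
    # Design matrix: each correspondence contributes one row [x, y, 1].
    A = np.hstack([ref_pts, np.ones((n, 1))])
    params_x, *_ = np.linalg.lstsq(A, scan_pts[:, 0], rcond=None)  # a, b, e
    params_y, *_ = np.linalg.lstsq(A, scan_pts[:, 1], rcond=None)  # c, d, f
    return np.concatenate([params_x, params_y])

# Synthetic example: a pure translation by (3, -2) should be recovered.
ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
scan = ref + np.array([3.0, -2.0])
a, b, e, c, d, f = fit_affine(ref, scan)
```

With four non-collinear patch pairs the system is overdetermined but consistent, so the least-squares solution reproduces the translation exactly (a = d = 1, b = c = 0, e = 3, f = -2).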
Arakawa does not specifically disclose wherein an alignment is made between the reference image and target image that have not been subjected to the image simplifying processing.
However, Fukunishi specifically teaches wherein the alignment of a reference and target image that have not been subjected to the image simplifying processing is based on the alignment of a reference and target image that have been subjected to the image simplifying processing ([col. 12, ln. 60 to col. 13, ln. 12] “In the step S110, an inter-image representative positional displacement amount is determined. The positioning reference image and the positioning subject image are respectively obtained by reducing the reference image and the subject image. Therefore, the representative positional displacement amount is determined by converting the most frequent motion vector, the number of votes for which equals or exceeds the predetermined threshold, at a magnification ratio used to convert the post-reduction positioning images into the pre-reduction images. The magnification ratio used to convert the post-reduction positioning images into the pre-reduction images is calculated as the inverse of a reduction ratio used when the positioning reference image and the positioning subject image are generated from the reference image and the subject image… when the positioning reference image and the positioning subject image are generated by respectively reducing the reference frame and the subject frame to a quarter of their original size, the representative positional displacement amount is determined by quadrupling the determined most frequent vector.”). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Arakawa and Fukunishi as within the same field of image processing to correct distortions, and as analogous to the claimed invention.
The motivation to combine would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, in that by applying an analogous inverse reduction ratio as disclosed in Fukunishi to obtain the representative displacement (or alignment) of the original reference and target image, one can effectively achieve a reduction in processing by aligning the original images at a lower resolution as opposed to the original resolution. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 1.
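The inverse-reduction-ratio conversion quoted from Fukunishi (a displacement measured between reduced positioning images is multiplied by the inverse of the reduction ratio to obtain the displacement for the pre-reduction images) can be sketched as follows. The function name and the translation-only model are illustrative assumptions, not code from the reference.

```python
def scale_displacement(dx_small, dy_small, reduction_ratio):
    """Convert a displacement measured between reduced (positioning) images
    into a displacement for the pre-reduction images by multiplying by the
    inverse of the reduction ratio (e.g., quarter-size images -> x4)."""
    inv = 1.0 / reduction_ratio
    return dx_small * inv, dy_small * inv

# Images reduced to a quarter of original size: a most-frequent motion
# vector of (2, -1) on the small images corresponds to (8, -4) at full
# resolution, matching Fukunishi's "quadrupling" example.
dx, dy = scale_displacement(2, -1, 0.25)
```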
14. Regarding Claim 2, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein the one or more controllers are further configured to obtain the moving related information by performing alignment between the reference image and the target image to be inspected that have been subjected to the image simplifying processing ([Fig. 10A-11D], [par. 0086, ln. 1-9], [par. 0087, ln. 2-10], [par. 0089, ln. 1-10], [par. 0093, ln. 1-10], [par. 0103, ln. 1-13], [par. 0104, ln. 1-8], [par. 0105-0106, ln. 1-14]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 2.
15. Regarding Claim 4, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein, in the image simplifying processing, the one or more controllers are configured to change the image simplifying processing in correspondence with the number of times of the alignment for obtaining the moving related information ([par. 0149, ln. 1-8] “In step S908, the resolution conversion unit 708, the positioning unit 710, the image deforming unit 709 cooperatively perform high-precision positioning processing. The high-precision positioning processing to be performed in step S908 is higher in accuracy than the positioning processing performed in step S902. More specifically, the accuracy of the high-precision positioning processing is a second precision that is higher than the first precision.”, [par. 0151, ln. 1-7] “…the resolution conversion unit 708 converts the partial scanned image and the partial reference image into images whose resolution is suitable for the high-precision positioning processing (e.g., high-resolution of 300 dpi×300 dpi). The image resolution for the high-precision positioning processing is higher than the image resolution for the positioning to be performed in step S902.”, [par. 0152, ln. 1-15] “…the positioning unit 710 obtains affine transformation parameters (i.e., positioning information) with reference to the partial scanned image and the partial reference image that have been subjected to the above-described resolution conversion processing… 710 uses the selection parameter dedicated to the high-precision positioning processing to obtain the affine transformation parameters. For example, the selection parameter dedicated to the high-precision positioning processing is equivalent to a setting of a dense density according to which one patch can be selected from an area of 200 pixels×200 pixels when a large patch size of 128 pixels×128 pixels is employed for the patch. The above-described patch density is significantly high compared to the patch density in the positioning processing performed in step S902.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Arakawa effectively changes the image simplifying processing to be of a higher resolution for the second alignment as compared to the first alignment, and therefore changes the image simplifying processing in correspondence with the number of times of the alignment. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 4.
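The two-stage positioning discussed above (a coarser first pass followed by a higher-resolution second pass) can be illustrated with a minimal sketch in which the resolution-reduction factor shrinks as the number of alignment passes increases. Block averaging and the specific reduction factors are assumptions for illustration, not Arakawa's actual reduction/zoom method.

```python
import numpy as np

def reduce_resolution(img: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a 2-D image by block averaging; a larger factor yields
    a coarser (more simplified) image for positioning."""
    h, w = img.shape
    # Crop so the dimensions divide evenly by the reduction factor.
    h2, w2 = (h // factor) * factor, (w // factor) * factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

# Coarse-to-fine schedule: the reduction factor decreases with each
# alignment pass, so later passes use progressively higher resolution.
factors = [8, 4, 2]          # pass 1 is coarsest, pass 3 is finest
img = np.random.default_rng(0).random((64, 64))
shapes = [reduce_resolution(img, f).shape for f in factors]
```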
16. Regarding Claim 5, a combination of Arakawa and Fukunishi teaches the apparatus of claim 2. Rejections analogous to claim 4 are further applicable to claim 5. Specifically, Arakawa further discloses wherein, in the image simplifying processing, the one or more controllers are configured to reduce a degree of the image simplifying processing as the number of times of the alignments for obtaining the moving related information increases ([par. 0149, ln. 1-8], [par. 0151, ln. 1-7], [par. 0152, ln. 1-15]). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Arakawa effectively reduces the amount of resolution reduction when performing the second alignment, and therefore reduces a degree of the image simplifying processing in correspondence with the number of times of the alignment. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 5.
17. Regarding Claim 7, a combination of Arakawa and Fukunishi teaches the apparatus of claim 5. Rejections analogous to claims 4 and 5 are further applicable to claim 7. Specifically, Arakawa further discloses wherein, in the image simplifying processing, the one or more controllers are configured to, in a case that the image simplifying processing is processing for reducing resolution, reduce the degree of the image simplifying processing by reducing a degree of reduction of the resolution as the number of times of the alignment increases ([par. 0149, ln. 1-8], [par. 0151, ln. 1-7], [par. 0152, ln. 1-15]). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Arakawa effectively reduces the amount of resolution reduction when performing the second alignment, and therefore reduces a degree of reduction of the resolution as the number of times of the alignment increases. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 7.
18. Regarding Claim 8, a combination of Arakawa and Fukunishi teaches the apparatus of claim 5. Rejections analogous to claims 4, 5, and 7 are further applicable to claim 8. Specifically, Arakawa further discloses wherein, in a case that the image simplifying processing is dilation processing, the one or more controllers are configured to reduce the degree of the image simplifying processing by reducing the number of times that the dilation processing is performed as the number of times of the alignment increases ([par. 0149, ln. 1-8], [par. 0151, ln. 1-7], [par. 0152, ln. 1-15]). The examiner specifically notes that the resolution conversion unit of Arakawa also performs functions analogous to dilation ([par. 0081, ln. 1-18] see “zoom”). The examiner also notes that dilation in the context of image processing would encompass resampling and/or resolution scaling, and therefore, given the BRI of dilation, is taught by Arakawa. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 8.
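As a generic illustration of the dilation processing discussed in the BRI analysis above (not code from Arakawa or Fukunishi), a binary morphological dilation merges nearby image elements into a single connected lump while keeping their positions. The array sizes and the implicit 3×3 structuring element are assumptions for illustration.

```python
import numpy as np

def binary_dilate(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary dilation with a 3x3 structuring element: each pass grows
    foreground pixels into their 8-neighborhood, so separate nearby
    elements merge into one connected lump without moving."""
    out = img.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                grown |= padded[1 + di : 1 + di + out.shape[0],
                                1 + dj : 1 + dj + out.shape[1]]
        out = grown
    return out

# Two dots separated by one background pixel merge after one dilation.
dots = np.zeros((3, 5), dtype=bool)
dots[1, 1] = dots[1, 3] = True
merged = binary_dilate(dots, iterations=1)
```

Fewer dilation passes leave elements closer to their original extent, which is the sense in which reducing the number of passes reduces the degree of the simplifying processing.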
19. Regarding Claim 11, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein the one or more controllers are further configured to obtain the target image to be inspected by reading an image formed on the recording medium ([par. 0057, ln. 1-8] “Subsequently, the printed product is conveyed by a conveyance belt 402 and read by an inspection sensor 403 provided closely to the conveyance belt 402. Although not illustrated, it is useful to provide a pair of inspection sensors on the upper and lower sides of the conveyance belt 402 so that a two-sided printed product can be read by two inspection sensors 403.”, [par. 0058, ln. 1-10] “A control unit 405 performs inspection processing on an image read by the inspection sensor 403 (i.e., a scanned image) and transmits inspection processing result information (inspection determination information) to the image forming apparatus 101 and the finisher 103. A detailed configuration of the control unit 405 and inspection processing that can be performed by the control unit 405 are described in detail below with reference to FIG. 7 and FIG. 9.”, [par. 0065, ln. 1-3] “An image input unit 701 receives a scanned image read by and transmitted from the inspection sensor 403…”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 11.
20. Regarding Claim 12, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein the one or more controllers are further configured to display an inspection result based on the inspection ([par. 0171, ln. 1-10] “…step S915, the main control unit 703 causes the operation unit 705 to display the inspection processing result. In this case, simply displaying the final determination result image is not useful to let a user recognize the image defect. Therefore, the main control unit 703 generates a composite image by combining the final determination result image with the scanned image and displays the composite image on the operation unit 705…”, [par. 0172, ln. 1-8] “…in step S915, the main control unit 703 causes the operation unit 705 of the inspection apparatus 102 to display the inspection processing result. However, any another configuration capable of notifying a user of the inspection processing result is employable… the main control unit 703 can cause the operation unit 207 of the image forming apparatus 101 to display the inspection processing result.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 12.
21. Regarding Claim 14, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein the image simplifying processing is smoothing processing for smoothing the image ([par. 0079, ln. 1-11] see “…halftone smoothing processing…”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 14.
22. Regarding Claim 15, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa further discloses wherein the image simplifying processing is processing for reducing resolution of the image ([par. 0079, ln. 1-11], [par. 0080, ln. 1-8], [par. 0081, ln. 1-18]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 15.
23. Regarding Claim 16, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arguments analogous to claim 8 are further applicable to claim 16. Arakawa further discloses wherein the image simplifying processing is dilation processing for dilating the image ([par. 0079, ln. 1-11], [par. 0080, ln. 1-8], [par. 0081, ln. 1-18] see “zoom”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 16.
24. Regarding Claim 18, the claim language is analogous to claim 1 with the exception of “A method of controlling an inspection apparatus for inspecting a target image to be inspected that is obtained by reading an image formed on a recording medium by a printing apparatus, the method comprising:”, wherein the remainder of the claim is analogous to claim 1. Arakawa further discloses a method of controlling an inspection apparatus ([par. 0198, ln. 1-10] “Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment (s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment (s).”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the method of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 18.
25. Regarding Claim 19, the claim language is analogous to claim 1 with the exception of “A non-transitory computer-readable storage medium storing a program for causing a processor to execute a method of controlling an inspection apparatus for inspecting a target image to be inspected that is obtained by reading an image formed on a recording medium by a printing apparatus, the method comprising:”, wherein the remainder of the claim is analogous to claim 1. Arakawa further discloses a non-transitory computer-readable storage medium storing a program for causing a processor to execute a method ([Fig. 7], [par. 0063, ln. 1-3], [par. 0198, ln. 1-10]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-transitory computer-readable storage medium of Arakawa with the original reference and target image alignment analogous to Fukunishi to obtain the invention as specified in claim 19.
26. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa in view of U.S. Patent No. 8,379,932 to Fukunishi, and further in view of “Registration of Camera Captured Documents Under Non-rigid Deformation” to Edupuganti et al. (hereinafter Edupuganti).
27. Regarding Claim 3, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa discloses wherein the one or more controllers are configured to perform the alignment between the reference image and the target image to be inspected that have been subjected to the image simplifying processing by means of {non-rigid-body} alignment to obtain the moving related information ([Fig. 10A-11D], [par. 0086, ln. 1-9], [par. 0087, ln. 2-10], [par. 0089, ln. 1-10], [par. 0093, ln. 1-10], [par. 0103, ln. 1-13], [par. 0104, ln. 1-8], [par. 0105-0106, ln. 1-14]). Arakawa and Fukunishi do not specifically disclose wherein the alignment is a non-rigid body alignment.
However, Edupuganti teaches wherein the alignment is a non-rigid body alignment ([pg. 389, col. 1, 2.6. Enhanced TPS-RPM, par. 1, ln. 1 to pg. 390, col. 1, par. 2, ln. 12, Equations (1)-(5)] “We design the enhanced TPS-RPM algorithm to overcome the drawbacks of TPS-RPM. Apart from the template point set X_r and test point set Y, the algorithm takes into account the correspondence set C′′. To prevent each template point being moved towards the irrelevant test point we assign different temperature T_i to each Gaussian cluster center x_i. Finally, the algorithm refines the new correspondences with nearby identical correspondences in C′′… Let C′′ = {(x_i, y_i) | x_i ∈ X_r, y_i ∈ Y} be the set of input correspondences computed using the methodology in Section 2.2, where X_r = {x_i : i = 1, 2, …, N} and Y = {y_j : j = 1, 2, …, M} are the template and test point sets respectively. As we enforce one-one mapping in the correspondence set, N is equal to M. Let f be the underlying Thin-Plate Spline [2] based non-rigid transformation function, and the transformed template point set is X_r′ = {x_i′ = f(x_i) : i = 1, 2, …, N}. We construct a correspondence matrix P to store the probabilities of each test point being assigned to each template point with dimension (N + 1) × (M + 1)… Equation (1)… The inner N × M sub-matrix defines the probabilities of each x_i being assigned to y_j. The presence of an extra row and column in the matrix handles outliers in both point sets. Each p_ij is computed as… Equation (2)… The goal of the framework is to find an optimal transformation matrix P′ and the optimal transformation function f′ that minimizes the energy function E(P, f) as defined below… Equation (4)… The transformation function f uses TPS [2], which can be decomposed into affine and non-affine subspaces, thereby accommodating both rigid and non-rigid transformations. f(x_i, d, w) = x_i·d + ϕ(x_i)·w, where x_i is the homogeneous point representation of the 2D point x_i, d is a (D+1) × (D+1) affine transformation matrix of the D-dimensional image (for 2D images D=2), and w is a N × (D+1) warping coefficient matrix representing non-affine deformation. ϕ(x_i) is the TPS kernel of size 1 × (N+1), where each entry ϕ_k(x_i) = ‖x_k − x_i‖² log ‖x_k − x_i‖.”). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize the TPS alignment of Edupuganti as a non-rigid body alignment. One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Edupuganti, Arakawa, and Fukunishi as within the same field of image processing to correct distortions, and Edupuganti and Arakawa as within the same field of correcting distortions specifically for printed material, and as analogous to the claimed invention. The motivation to combine would have been obvious to one of ordinary skill in the art, in that the non-rigid body transformations of Edupuganti allow for real world acquisition of scanned documents despite non-rigid deformations ([pg. 385, col. 1, Abstract, par. 1, ln. 9-14] “We find that the proliferation of camera captured images makes it necessary to address camera noise such as non-uniform lighting, clutter, and highly variable scale/resolution. The absence of a scan bed also leads to challenging non-rigid deformations being seen in paper images.”), and therefore improved registration when compared to a template image ([pg. 392, Figure 6, see 6th column], [pg. 392, col. 1, par. 1, ln. 10-12] “Enhanced TPS-RPM incorporates prior knowledge of correspondences into TPS-RPM and leads to better registration of non-rigidly deformed images.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the combination of Arakawa and Fukunishi with the non-rigid body alignment of Edupuganti, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi and the non-rigid body alignment of Edupuganti to obtain the invention as specified in claim 3.
28. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa in view of U.S. Patent No. 8,379,932 to Fukunishi, and further in view of U.S. Publication No. 2017/0004360 to Tanaka et al. (hereinafter Tanaka).
29. Regarding Claim 6, a combination of Arakawa and Fukunishi teaches the apparatus of claim 5. Arakawa further discloses wherein in the image simplifying processing, the one or more controllers are configured to, in a case that the image simplifying processing is smoothing processing, reduce the degree of the image simplifying processing {by reducing size of a filter used in the smoothing processing} as the number of times of the alignment increases ([par. 0149, ln. 1-8], [par. 0151, ln. 1-7], [par. 0152, ln. 1-15]). Arakawa does not specifically disclose that the degree of the image simplifying processing is reduced by reducing the size of a filter used in the smoothing processing, though one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that the image simplifying processing of Arakawa does include both a smoothing processing ([par. 0079, ln. 1-11]) and a reduction of the degree of image simplifying processing as the number of times of the alignment increases ([par. 0149, ln. 1-8], [par. 0151, ln. 1-7], [par. 0152, ln. 1-15]). Likewise, Fukunishi does not specifically disclose reducing the degree of the image simplifying processing by reducing the size of a filter used in the smoothing processing.
However, Tanaka teaches that a larger size of the smoothing filter can remove information that is conducive to determining a defect ([par. 0227, ln. 1-10] “As with the above basic configuration, the second basic configuration also ensures that density unevenness appearing as a white stripe can be extracted. When the filter size is too large, then a luminance value after a filtering process is not sufficiently high even if a target pixel is within the white stripe, and therefore a defect portion cannot be extracted. For this reason, in this basic configuration, the filter size F is provided with a maximum value Fmax and a minimum value Fmin, and in Step S153 a filter size between Fmax and Fmin is set.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Arakawa, Fukunishi, and Tanaka as within the same field of image processing to correct distortions, and Arakawa and Tanaka as more specifically in the same field of image processing to correct distortions for printed material, and as analogous to the claimed invention. The motivation to combine would have been obvious to one of ordinary skill in the art, in that by incorporating a reduction of the size of the smoothing filter in the higher accuracy alignment of Arakawa, one prevents defect information from being inadvertently removed and thereby excluded from subsequent defect detection, as taught in Tanaka ([par. 0227, ln. 1-10]). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the apparatus of the combination of Arakawa and Fukunishi with the reduction of the size of the filter used in the smoothing processing as taught in Tanaka, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi and the reduction of the size of the filter used in the smoothing processing as taught in Tanaka to obtain the invention as specified in claim 6.
30. Claims 9 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa in view of U.S. Patent No. 8,379,932 to Fukunishi, and further in view of U.S. Publication No. 2014/0270397 to Sochi (hereinafter Sochi).
31. Regarding Claim 9, a combination of Arakawa and Fukunishi teaches the apparatus of claim 5. Arakawa further discloses wherein the one or more controllers are further configured to set an {inspection type} and an inspection parameter for the inspection ([par. 0121, ln. 1-16] “The determination unit 712 plots the image feature quantity of each pixel block in the feature quantity space illustrated in FIG. 12 and determines whether the image feature quantity of each pixel block is included in the OK region or the NG region illustrated in FIG. 12. More specifically… 712 determines whether the image feature quantity of each pixel block is equal to or greater than the boundary value (i.e., the first criterion value) of the first criterion graph (see FIG. 12). If it is determined that the image feature quantity of the pixel block is equal to or greater than the threshold value illustrated in FIG. 12… 712 regards the pixel block as an image defect candidate. If it is determined that the image feature quantity of the pixel block is less than the threshold value illustrated in FIG. 12… 712 does not regard the pixel block as an image defect candidate.”), wherein the {inspection type} includes inspection of {a dot shape}, a linear shape ([Fig. 13A-F, see 1306], [Fig. 15A-D, see 1506 and 1507]), image unevenness ([par. 0078, ln. 1-18] “The image quality difference occurs due to influences of pre-print image processing, image forming apparatus characteristics, and scanner characteristics, or differences in image format. The image quality difference occurs regardless of the presence of an image defect. The pre-print image processing includes color conversion processing, gamma processing, and halftone processing. Further, the image forming apparatus characteristics includes color reproducibility, dot gain, and gamma characteristics. Further, the scanner characteristics include color reproducibility, S/N, and scanner MTF. 
Further, the image format difference indicates that two images are different in the number of bits of one pixel. The image quality difference adjustment unit 707 performs correction processing on both of the scanned image and the reference image or only the reference image to remove these influences so that the scanned image and the reference image become equivalent in image quality if there is not any image defect.”, [par. 0079, ln. 1-11]), and a surface shape ([par. 0082, ln. 1-11] “An image deforming unit 709 can perform image deformation processing on the scanned image and the reference image. In general, paper expansion/contraction or skew in a printing operation or skew in a scanning operation may generate a geometrical difference between the scanned image and the reference image. The image deforming unit 709 corrects the geometrical difference between the scanned image and the reference image by performing image geometric deformation with reference to the skew angle information obtained from the skew detection unit 706 or positioning information obtained from a positioning unit 710.”, [par. 0109, ln. 1-6], see also [par. 0174, ln. 1-17] “…only one criterion is used as the first criterion for the collation/ determination processing in step S903. Alternatively, it is useful to set the first criterion to be variable depending on the paper type (e.g., thick paper or plain paper) of an inspection target printed product. For example, the possibility that a printed product using thick paper is an important printed product (e.g., a front cover) is relatively high. Therefore, it is useful to set a severer criterion to detect an image defect included in each thick paper, compared to that for plain paper. If the criterion becomes severer, not only the number of image defect candidates increases but also the amount of calculations in the following processing increases. As a result, the processing time increases. 
However, the image forming apparatus 101 has a sufficient processing time because the printing speed required for thick paper is a half of the processing speed required for plain paper.”). Arakawa does not specifically disclose an inspection type or a dot shape, though one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Arakawa does disclose at least inspection of a linear shape, image unevenness, and a surface shape, and inspection parameters relating to said inspection. Likewise, Fukunishi does not specifically disclose an inspection type or a dot shape.
However, Sochi specifically discloses an inspection type ([Fig. 8 and 9], [par. 0066, ln. 1-20] “The defect type determiner unit 434 processes labeling for an image of a page, which has been judged to include a defect in the defect determination. The labeling includes processing to allot the same label to a pixel when neighboring pixels have the same color or the same defect areas continuing within the scanned image (e.g. there is a connection ingredient between pixels). The defect type determiner unit 434 determines the type of defect from a feature quantity of the defect, such as the area or the length of a connection ingredient labeled as existing at the defect position resulting from the defect determination. For example, the feature quantities may be determined beforehand for each type of the defect. In this example, the defect type determiner unit 434 could determine the type of defect having a feature quantity that is the closest to the feature quantity, such as square measurement and/or length, of the connection ingredient labeled as existing at the location of the defect resulting from the defect determination. For example, the defect type determiner unit 434 could determine a black line L, a faint print B1, a faint print B2 and dirty mark D among the defects shown in FIG. 7A.”) and an inspection parameter ([par. 0048, ln. 1-8] “The comparison inspection unit 404 compares the read image data and the master image expressed via 8-bits for each one of R, G and B (total 24 bits) for each corresponding pixel. In particular, for each pixel, the comparison inspection unit 404 calculates pixel value differences for each one of R, G, and B. Based on a comparison of the calculated differences and a threshold, the comparison inspection unit 404 determines whether a defect has occurred in the read image data.”, [par. 0068, ln. 1-11] “The type classification determiner unit 435 determines, for each type of the defect, whether the defect is permissible based on the defect levels input from the defect type determiner unit 434 for each type of defect input from the defect type determiner unit 434. FIG. 8 illustrates a permissible level table stored in the permissible level table database 436. In the permissible level table database 436, a permissible defect level is defined for each of the types of defect. For example, the permissible defect level for a curled side may be defined as "6", and the permissible defect level for defects other than the curled side may be defined as "5".”), wherein the inspection type includes a dot shape ([Fig. 7B, see D], [Fig. 8, see Dirty Mark column 3], [Fig. 9-10B], [par. 0070, ln. 1-8] “FIG. 9 illustrates a determiner result table stored in the determiner result table database 437. The determiner result table, for example, indicates the determination result determined by the type classification determiner unit 435. Referring to FIG. 9, the determiner result table includes information associating the defect level for each type of defect for each page with the determination result, the address stored for the image of the page, and the size of the page.”, [par. 0071, ln. 1-8] “For example, as shown in FIG. 9, in the first page, the defect level for a faint print, a curled side, a toner leak, a dirty mark, an image abnormality, a black line, a white line, and, an attachment of the dust is "1", "2", "3", "2", "2", "4", "1", "1", respectively. Because, in the first page, each of the defect levels is lower than the permissible defect level for the respective types shown in FIG. 8, the result of the defect determination is indicated in the table as normal.”, [par. 0072, ln. 1-9] “On the other hand, in the second page, the defect level for the faint print, the curled side, the toner leak, the dirty mark, the image abnormality, the black line, the white line, and, the attachment of the dust is "2", "2", "6", "8", "2", "1", "4", "1", respectively. Because, in the second page, the defect level for the dirty mark and the curled side are higher than the permissible defect level for the respective type shown in FIG. 8, the result of the defect determination is indicated in the table as abnormal.”), a linear shape ([Fig. 7B, see B1-B2 and L], [Fig. 8, see Black and White line column 6-7], [Fig. 9-10B], [par. 0070, ln. 1-8], [par. 0071, ln. 1-8], [par. 0072, ln. 1-9]), image unevenness ([Fig. 8, see Faint Print and Toner Leak, column 1-2], [Fig. 9-10B], [par. 0070, ln. 1-8], [par. 0071, ln. 1-8], [par. 0072, ln. 1-9]), and a surface shape ([Fig. 8, see Curled Selvage, column 3], [Fig. 9-10B, specifically 10B top 2 images with curled selvage], [par. 0070, ln. 1-8], [par. 0071, ln. 1-8], [par. 0072, ln. 1-9]). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Arakawa, Fukunishi, and Sochi as within the same field of image processing to correct distortions, and Arakawa and Sochi as more specifically in the same field of image processing to correct distortions for printed material, and as analogous to the claimed invention. Specifically, the motivation to combine would have been obvious to one of ordinary skill in the art, and is disclosed in Sochi, wherein it allows for customizable determination of the permissibility of defect levels of each type ([par. 0008, ln. 1-3] “However, it is difficult for the user to intuitively set the permissible defect level for each type of defect...”, [par. 0068, ln. 1-11]).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the apparatus of the combination of Arakawa and Fukunishi with the inspection type and dot shape determination of Sochi, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi and the inspection type and dot shape determination of Sochi to obtain the invention as specified in claim 9.
32. Regarding Claim 13, a combination of Arakawa and Fukunishi teaches the apparatus of claim 12. Arguments analogous to claim 9 are further applicable to claim 13. Specifically, Arakawa discloses when the inspection result is displayed, the one or more controllers are configured to display whether or not there is a defect corresponding {to the inspection type} based on the inspection, and in a case where the defect is detected, the one or more controllers are configured to display a position on the image at which the defect is detected ([par. 0171, ln. 1-10], [par. 0172, ln. 1-8]). Specifically, one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Arakawa discloses highlighting the defect and displaying the defect and position to the user, but does not specifically disclose an “inspection type”. Likewise, Fukunishi does not specifically disclose an inspection type.
However, Sochi specifically discloses an inspection type ([Fig. 8 and 9], [par. 0066, ln. 1-20]). The motivation to combine remains analogous to claim 9. Specifically, in combining the inspection type of Sochi with the apparatus of the combination of Arakawa and Fukunishi, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to display the defect corresponding to the inspection type selected by the user. Likewise, Sochi specifically discloses an analogous displaying of the defect to a user ([par. 0088, ln. 1-7] “The permissible changing unit 439 may instruct the display unit to display a defect position permissible designation screen, which is a screen that permits a user to designate the position of the defect as shown FIG. 12. Further, the permissible changing unit 439 may change the permissible defect level corresponding to the type of defect at the position designated by the user.”, [par. 0089, ln. 1-5] “Further, the permissible changing unit 439 may instruct the display unit to display the defect, which has a defect level that is higher than the permissible defect level in the magnified image displayed in the defect position designation screen.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the apparatus of the combination of Arakawa and Fukunishi with the inspection type determination of Sochi, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi and the inspection type determination of Sochi to obtain the invention as specified in claim 13.
33. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa in view of U.S. Patent No. 8,379,932 to Fukunishi, and further in view of U.S. Publication No. 2014/0270397 to Sochi, and further in view of U.S. Publication No. 2019/0289152 to Tsue (hereinafter Tsue).
34. Regarding Claim 10, a combination of Arakawa, Fukunishi, and Sochi teaches the apparatus of claim 9. Arakawa discloses wherein the inspection parameter includes {designation of a filter corresponding to the inspection type}, and a threshold for determining a defect in the comparison in the inspection ([par. 0121, ln. 1-16]). Arakawa does not specifically disclose designation of a filter corresponding to an inspection type. Likewise, Fukunishi does not specifically disclose designation of an inspection type or a filter corresponding to the inspection type.
However, Sochi discloses designation of features corresponding to an inspection type ([Fig. 8 and 9], [par. 0066, ln. 1-20]). The motivation to combine remains analogous to claim 9. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the apparatus of the combination of Arakawa and Fukunishi with the inspection type determination of Sochi, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results. Sochi does not specifically disclose designation of a filter corresponding to an inspection type, though one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize that Sochi does disclose designating features of an inspection type ([par. 0066, ln. 1-20]). The examiner specifically notes that filtering to determine features is well known within the art, and that one of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that certain inspection types and their respective features could be extracted by well-known and commonly used filters (e.g., Sobel filters for line defects, etc.).
However, Tsue specifically discloses designating a filter to determine a defect ([Fig. 10-11], [par. 0061, ln. 1-11] “…a difference between the read image obtained by reading the printed matter and the printing image data to generate a difference image and a procedure of determining the presence or absence of a defect based on the magnitude of fluctuation of a pixel value in the difference image, an edge detection filter, a threshold value for determining contamination…”, [par. 0106, ln. 1-9] “Thereafter, in order to detect a place (edge) where the fluctuation in value with respect to an adjacent pixel is larger in the difference image, a process of applying the edge detection filter to the difference image is performed and the edge is emphasized. Through this process, a point where the fluctuation in value between pixels is larger is emphasized. Note that pixels having a predetermined interval are selected for comparison. The interval is not particularly limited, but adjacent pixels are preferable.”, [par. 0107, ln. 1-3] “As the edge detection filter, for example, a Sobel filter or a Robinson filter can be used…”, [par. 0108, ln. 1-7] “a 3×3 filter is used to emphasize the fluctuation in value with respect to an adjacent pixel, but the size of the filter usable in the present invention is not particularly limited; a 5×5 filter may be used in order to work out a value that fluctuates between pixels away from each other by two pixels or a larger filter may be used.”, [par. 0111, ln. 1-9] “Since the value after the edge detection filter process varies depending on the coefficient of the edge detection filter to be used, it is desirable to set the threshold value based on the type of the edge detection filter. For example, a threshold value associated with the type of filter may be saved in advance in the storage such that a threshold value according to the type of a filter to be used is used. 
In addition, learning can also be performed based on past inspection results such that the threshold value is adjusted.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize Arakawa, Fukunishi, Sochi, and Tsue as within the same field of image processing to correct distortions, and Arakawa, Sochi, and Tsue as more specifically in the same field of image processing to correct distortions for printed material, and as analogous to the claimed invention. The motivation to combine would have been obvious to one of ordinary skill in the art, in that the filters of Tsue offer a fast and efficient way to highlight noise and extract features analogous to those taught in the apparatus of the combination of Arakawa, Fukunishi, and Sochi. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of the combination of Arakawa, Fukunishi, and Sochi with the designation of filters based on the inspection type analogous to Tsue, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, the inspection type determination of Sochi, and the designation of a filter based on inspection type analogous to Tsue to obtain the invention as specified in claim 10.
35. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2013/0148987 to Arakawa in view of U.S. Patent No. 8,379,932 to Fukunishi, and further in view of U.S. Publication No. 2015/0356717 to Madden et al. (hereinafter Madden).
36. Regarding Claim 17, a combination of Arakawa and Fukunishi teaches the apparatus of claim 1. Arakawa specifically discloses wherein the plurality of image elements include a nearby and similar character string {and bar code} in the image ([Fig. 11A, see Tel: and Fax: with similar character string save for final digit]). Arakawa and Fukunishi do not specifically disclose wherein the image elements include a bar code.
However, Madden specifically discloses wherein the image elements include a barcode and similar character string in the image ([Fig. 10, see Table on right, barcode on bottom], [Fig. 11, see barcodes, similar character strings], [par. 0036, ln. 1-4] “…the inspection tools include a tool for barcode testing, said tool being adapted to locate and decode one barcode line in a region of an input image, and to characterize the barcode in quality terms.”, [par. 0150, ln. 6-12] “The format label is therefore mostly composed of variable placeholders, each of which may be either an image placeholder, a text string placeholder, a barcode placeholder, a line, a box or a circle. The location, dimension and some additional information for each of these placeholders is specified at the format label level. These placeholders collectively form the set of variable data associated with a format label.”). One of ordinary skill in the art, before the effective filing date of the claimed invention, would specifically recognize Arakawa and Madden as within the same field of image processing for quality inspection of printed material, and as analogous to the claimed invention. Specifically, the motivation to combine would have been obvious to one of ordinary skill in the art, in that it would allow application of the inspection apparatus of the combination of Arakawa and Fukunishi to labels with barcodes, and thus allow for real world quality inspection of printed labels. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have combined the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi, and further combined the apparatus of the combination of Arakawa and Fukunishi with the similar character strings and barcodes of Madden, through known means, with no change to their respective function, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the apparatus of Arakawa with the original reference and target image alignment analogous to Fukunishi and the similar character strings and barcodes of Madden to obtain the invention as specified in claim 17.
Conclusion
37. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAULO ANDRES GARCIA whose telephone number is (703)756-5493. The examiner can normally be reached Mon-Fri, 8-4:30PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached on (571)272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAULO ANDRES GARCIA/Examiner, Art Unit 2669
/CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669