DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-6 and 8-10 are currently pending in the present application, with claim 1 being independent.
Response to Amendments / Arguments
Applicant’s arguments, see pages 5-7, filed 11/12/2025, with respect to the rejection of claims 1-6 and 8-10 under 35 U.S.C. § 112 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of applicant’s amendments to the claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6 and 8-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The phrase “certain part” in claim 1 is a relative term which renders the claim indefinite. The phrase “certain part” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear whether a “certain part” of an image refers to the pattern region, the background region, a detected defect region, or a region determined by a known segmentation; the claim therefore fails to provide objective boundaries for what portion of the image is being referenced. It is further unclear how the “intensity” of such a part is determined for purposes of the claimed numerical value, ratios, and thresholds.
The examiner respectfully requests the applicant to clarify the scope of the claimed invention.
Claims depending thereon are also rejected for substantially the same reasons as those set forth for the claims from which they depend.
Claims 1-6 and 8-10 will be examined as best understood by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Ha et al. "Robust reflection detection and removal in rainy conditions using LAB and HSV color spaces." REV Journal on Electronics and Communications 6, no. 1-2 (2016), hereinafter referred to as “Ha”, in view of Suzuki et al. (US 8548200 B2), hereinafter referred to as “Suzuki”, and in further view of Wang et al. "Identification and detection of surface defects of outer package printed matter based on machine vision." Journal of Korea TAPPI 52, no. 2 (2020): 3-11, hereinafter referred to as “Wang”.
Regarding claim 1, Ha discloses a method for processing an image, comprising:
acquiring a to-be-processed image (Pg. 14, Section 2.2, Right Column; original RGB input image), wherein the to-be-processed image comprises a background (Pg. 14, Section 2.1; vehicles) and a pattern (Pg. 14, Section 2.2; reflected areas);
converting the to-be-processed image into a first color space to obtain a first image component of the to-be-processed image in a first channel, wherein the first channel describes hue information of an image (Table I Summary of HSV Color Space and Pg. 14, Section 2.2, Right Column; The original RGB input image is cloned and converted to …HSV color space…HSV color space contains three channels, H, S, V which represent hue, saturation, and value…focuses on L and H channels), and the pattern and the background in the first image component behave differently than in a second image component in terms of intensity (Pg. 15, Section 2.2, Left Column; dark objects such as black vehicles share many similar features with reflection. So they can be mistakenly marked as reflection if only L channel values are considered. In the H channel, reflected areas tend to have lower values…);
converting the to-be-processed image into a second color space to obtain the second image component of the to-be-processed image in a second channel, wherein the second channel describes brightness information of an image (Table I Summary of LAB Color Space and Pg. 14, Section 2.2, Right Column; the original RGB input image is cloned and converted to LAB…The LAB color space contains a lightness channel (L), and two color channels (A, B)…focuses on L and H channels), and the ratio of the intensity between the pattern and the background in the second image component (Pg. 15, Section 2.2, Left Column, Formula (1) and Table II; the values of pixels in both L and H channels are compared in turn. Let R be the group of reflected pixels; pixel p belongs to R if its lightness value L(p)…are lower than given thresholds (a, B), then (1) …L(p) < a…Table II…High Intensity 46, Low Intensity 63. Pg. 16, Section 2.3, Left Column; observation that the reflected areas have different intensities, in term of lightness, compared to non-reflected areas. Pg. 16-17, Section 3; L channel is used to detect the high intensity lightness pixels) is similar to the ratio of the intensity between the pattern and the background in the to-be-processed image (Pg. 15, Section 2.2, Left Column, Formula (1) and Table II; the values of pixels in both L and H channels are compared in turn. Let R be the group of reflected pixels; pixel p belongs to R if its… hue value H(p) are lower than given thresholds (a, B), then (1)…H(p) < B…Table II…High Intensity 54, Low Intensity 62),
and performing image fusion based on the first image component and the second image component to obtain a target image (Fig. 3b-3d and Pg. 15, Section 2.2, Left Column; By combining the values of L and H channels, any pixels that have lower values than a certain threshold are marked as reflection, and other marked as non-reflection), wherein the ratio of the intensity between the pattern in the target image and the background in the target image is greater than or is equal to a preset threshold (Pg. 16-17, Section 2.3; reflected areas have different intensities, in term of lightness, compared to non-reflected areas…So, we can remove reflections by raising the lightness of these areas to slightly match with non-reflected ones…the average chromaticity value among the chosen segment is computed. Afterwards, we re-scale the reflected area chromaticity value to the computed value…if the difference between the reflected area value and the best-fit region average value is larger than a certain threshold, the shadowed area is not changed),
the intensity of a certain part in an image is a numerical value assigned to a pixel in the certain part (Table II and Pg. 15, Section 2.2; any pixels that have lower values than a certain threshold are marked as reflection, and other marked as non-reflection…pixels are classified).
Ha does not disclose wherein similar ratio means that the difference between the ratio of the intensity between the pattern and the background in the second image component and the ratio of intensity between the pattern and the background in the to-be-processed image meets a preset value.
In the same art of intensity image analysis, Suzuki discloses wherein similar ratio means that the difference between the ratio of the intensity between the pattern and the background in the second image component and the ratio of intensity between the pattern and the background in the to-be-processed image meets a preset value (Column 2, Lines 54-63; The system includes an intensity ratio calculating unit that calculates a ratio of a light intensity of a first area in the picked-up image to a light intensity of a second area in the picked-up image…The system includes a threshold correcting unit that compares the calculated ratio with a preset first value).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ha’s intensity-based image processing technique to include the similarity evaluation using a preset value as taught by Suzuki. Ha already relies on numerical comparisons of intensity values and applies preset thresholds to classify image regions based on brightness and hue. Suzuki teaches that when comparing intensity ratios between image regions, the comparison may be evaluated against a preset value. Therefore, using an explicit preset threshold when evaluating similarity between intensity relationships, so as to maintain the structural contrast characteristics of the original image while still allowing enhancement or transformation, would have been obvious. Incorporating a preset-value similarity check into existing quantitative comparison techniques is a predictable optimization that further prevents over-correction and improves robustness.
Ha in view of Suzuki does not disclose the target image is used for detecting appearance defect of a package on which the pattern is printed.
In the same art of background and pattern detection, Wang discloses the target image is used for detecting appearance defect of a package on which the pattern is printed (Pg. 4-6 Section 2; collects the target image through computer, and then identifies the image defects by digitizing the image…In the case of external packaging printing…detect defects on the printing area).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Ha and Suzuki’s intensity-based image processing technique to the package defect detection as taught by Wang. Ha, Suzuki, and Wang each encompass image-based identification of visually distinct regions; therefore, the combination is a predictable use of known image-processing techniques for a known purpose, leveraging Ha’s intensity-based image processing to improve the accuracy and reliability of defect detection in packaging systems.
Regarding claim 2, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 1. Ha further discloses wherein the acquiring a to-be-processed image comprises: acquiring an original image, and performing enhancement processing on the original image to obtain the to-be-processed image (Pg. 14, Section 2.1, Right Column; vehicle detection technique, which is described in [19, 20] is applied. We first calculate the contour of each foreground object and use the contour size as a mean to filter noise and small objects. After that a bounding box is applied to each vehicle blob. Using the size and location of these bounding boxes, we extract the moving vehicles from the input image into separated vehicle images. This step helps reducing the computation effort since later processes perform only on a smaller set of images).
Ha, Suzuki, and Wang are combined for the reasons set forth above with respect to claim 1.
Regarding claim 3, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 1. Ha further discloses wherein the performing image fusion based on the first image component and the second image component to obtain a target image comprises:
performing the image fusion based on the first image component and the second image component to obtain a candidate target image (Fig. 3b-3d and Pg. 15, Section 2.2, Left Column; By combining the values of L and H channels, any pixels that have lower values than a certain threshold are marked as reflection, and other marked as non-reflection. Particularly, the values of pixels in both L and H channels are compared in turned),
and performing preset filtering processing on the candidate target image to obtain the target image (Pg. 15, Section 2.2, Right Column; after pixels are classified, we apply morphological operations to remove isolated pixels. Mis-marked pixels are resolved using dilation and erosion. Then, marked areas which are not large enough (whose number of pixels is smaller than a certain threshold) will be removed. Pg. 16, Section 2.3, Right Column; all remaining edges are smoothed with a Gaussian mask. Fig. 3c-3d).
Ha, Suzuki, and Wang are combined for the reasons set forth above with respect to claim 1.
Regarding claim 6, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 2. Ha in view of Suzuki does not disclose wherein the original image is an image of a package, the background is a package body, and the pattern comprises at least one of a character, a logo and a graphic.
In the same art of background and pattern detection, Wang discloses wherein the original image is an image of a package, the background is a package body, and the pattern comprises at least one of a character, a logo and a graphic (Pg. 7, Fig. 2).
Ha, Suzuki, and Wang are combined for the reasons set forth above with respect to claim 1.
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Ha and Suzuki’s intensity-based image processing technique to the package defect detection system as taught by Wang. Having the package body be the background and at least one of a character, logo, and a graphic be the pattern would yield predictable results in improving scalability of defect detection systems using known intensity-based region identification/segmentation.
Regarding claim 8, Ha in view of Suzuki and in further view of Wang discloses the method according to claim 1. However, Ha does not appear to explicitly disclose a non-transitory computer-readable storage medium storing a program, wherein when the program is executed by a processor, the method according to claim 1 is implemented.
In the same art of intensity image analysis, Suzuki discloses a non-transitory computer-readable storage medium storing a program, wherein when the program is executed by a processor, the method according to claim 1 is implemented (Fig. 1 and Column 6, Lines 1-11; image-processing ECU 7…CPU, storage medium).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to implement Ha’s intensity-based image processing technique into Suzuki’s image-processing system comprising a processor and a storage apparatus storing a program that, when executed, causes the processor to perform the method steps. The motivation lies in the advantage that computer systems with processors and storage media are a standard means of executing image and video processing methods, and would have been an obvious design choice, allowing the method to be automated, executed, and practically deployed in electronic devices.
Regarding claim 9, Ha in view of Suzuki and in further view of Wang discloses the method according to claim 1. However, Ha does not appear to explicitly disclose a computing device, comprising: a processor, and a storage medium storing a program, wherein when the program is executed by the processor, the method according to claim 1 is implemented.
In the same art of intensity image analysis, Suzuki discloses a computing device, comprising: a processor; and a storage medium storing a program, wherein when the program is executed by the processor, the method according to claim 1 is implemented (Fig. 1 and Column 6, Lines 1-11; image-processing ECU 7).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to implement Ha’s intensity-based image processing technique into Suzuki’s image-processing system comprising a processor and a storage apparatus storing a program that, when executed, causes the processor to perform the method steps. The motivation lies in the advantage that computer systems with processors and storage media are a standard means of executing image and video processing methods, and would have been an obvious design choice, allowing the method to be automated, executed, and practically deployed in electronic devices.
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Ha et al. "Robust reflection detection and removal in rainy conditions using LAB and HSV color spaces." REV Journal on Electronics and Communications 6, no. 1-2 (2016), hereinafter referred to as “Ha”, in view of Suzuki et al. (US 8548200 B2), hereinafter referred to as “Suzuki”, in further view of Wang et al. "Identification and detection of surface defects of outer package printed matter based on machine vision." Journal of Korea TAPPI 52, no. 2 (2020): 3-11, hereinafter referred to as “Wang”, and in further view of Chiu et al. (US 20050275736), hereinafter referred to as “Chiu”.
Regarding claim 4, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 1, but does not appear to explicitly disclose wherein the converting the to-be-processed image into a second color space comprises: converting the to-be-processed image into the first color space to obtain a first image, and converting the first image into the second color space.
In the same art of color space conversion for image enhancement processing, Chiu discloses wherein the converting the to-be-processed image into a second color space comprises:
converting the to-be-processed image into the first color space to obtain a first image (Fig. 14 and Par. 0093; YCbCr to sRGB conversion…sRGB to XYZ conversion…XYZ is an intermediate format between RGB and CIELab),
and converting the first image into the second color space (Fig. 14 and Par. 0093; XYZ values are converted to CIELab through CIELab space).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Chiu’s teachings of converting to a first color space to obtain a first image, and converting the first image to a second color space, into Ha, Suzuki, and Wang’s intensity-based image processing techniques. Performing sequential conversion through multiple color spaces is a routine and predictable technique in image processing, in which an intermediate format allows subsequent processing to operate in the most suitable representation when different color spaces offer different advantageous properties; the modification therefore merely reflects a conventional implementation choice in standard color space conversion pipelines.
Regarding claim 5, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 1, and Ha further discloses the second color space is a Lab color space (Table I Summary of LAB Color Space and Pg. 14, Section 2.2, Right Column; the original RGB input image is cloned and converted to LAB…The LAB color space contains a lightness channel (L), and two color channels (A, B)…focuses on L and H channels).
Ha in view of Suzuki and in further view of Wang does not disclose wherein the first color space is the RGB color space or an XYZ color space.
In the same art of color space conversion for image enhancement processing, Chiu discloses wherein the first color space is the RGB color space or an XYZ color space, and the second color space is a Lab color space (Fig. 14 and Par. 0093; sRGB to XYZ conversion…XYZ is an intermediate format between RGB and CIELab…XYZ values are converted to CIELab through the CIELab space).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Chiu’s teachings that the first color space is an RGB color space and the second color space is a Lab color space into Ha, Suzuki, and Wang’s color space conversion techniques. The motivation lies in the advantage of providing a more perceptually uniform representation of color, essential for accurate color representations and manipulations. The RGB-to-Lab workflow is an established standard technique in color space conversion for applications requiring precise color management.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ha et al. "Robust reflection detection and removal in rainy conditions using LAB and HSV color spaces." REV Journal on Electronics and Communications 6, no. 1-2 (2016), hereinafter referred to as “Ha”, in view of Suzuki et al. (US 8548200 B2), hereinafter referred to as “Suzuki”, in further view of Wang et al. "Identification and detection of surface defects of outer package printed matter based on machine vision." Journal of Korea TAPPI 52, no. 2 (2020): 3-11, hereinafter referred to as “Wang”, in further view of Chiu et al. (US 20050275736), hereinafter referred to as “Chiu”, and in further view of Shi et al. (CN 112862755 A), hereinafter referred to as “Shi”.
Regarding claim 10, Ha in view of Suzuki and in further view of Wang discloses the method for processing an image according to claim 1, and Ha further discloses the second color space is a Lab color space, and the second channel is an L channel (Table I Summary of LAB Color Space and Pg. 14, Section 2.2, Right Column; the original RGB input image is cloned and converted to LAB…The LAB color space contains a lightness channel (L), and two color channels (A, B)…focuses on L and H channels).
Ha in view of Suzuki and in further view of Wang each teach conversion of image data between known color spaces of RGB, HSV and LAB.
However, Ha in view of Suzuki and in further view of Wang does not disclose wherein the first color space is a Red-Green-Blue (RGB) color space.
In the same art of color space conversion for image enhancement processing, Chiu discloses wherein the first color space is a Red-Green-Blue (RGB) color space (Fig. 14 and Par. 0093; YCbCr to sRGB conversion).
Ha, Suzuki, Wang, and Chiu are combined for the reasons set forth above with respect to claim 5.
Ha in view of Suzuki, in view of Wang, and in further view of Chiu does not appear to explicitly disclose that the first channel is a G channel, wherein the G channel is complementary to the background color.
In the same art of defect detection, Shi discloses wherein the first color space is a Red-Green-Blue (RGB) color space, and the first channel is a G channel (Par. 0078; R, G, B three-channel components of the printed paper section image collected by the optical microscope are unified into a parameter represented by 0 to 255), the G channel is complementary to the background color (Par. 0070; the light of the light emitting panel and the to-be-detected printed sample 200 ink layer color as complementary color).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate an RGB representation in which the first channel is a G channel whose color response is complementary to the background color, as taught by Shi, into Ha, Suzuki, Wang, and Chiu’s intensity-based image processing technique. In defect detection systems, having a background’s chromaticity lie complementary to the green axis in an RGB color space provides the predictable benefit of maximizing separation between background and pattern so that defects become more detectable under thresholding/segmentation (Shi Par. 0070; enhancing the contrast of the color of the two, making the section image more clear). Although Shi does not explicitly disclose that the G channel is the first channel, an RGB color space offers only three candidate channels (R, G, B) (as further described in Shi Par. 0078), and it would have been an obvious engineering design choice to select whichever channel provides the largest measured intensity separation between the background and the pattern. Moreover, given that the green response is complementary to the background chromaticity, that typical image sensors have more green pixels than red or blue, that the human eye is most sensitive to green light, and that the green channel is therefore crucial for capturing brightness/luminance efficiently, selecting the G channel is the predictable result of a routine calibration/selection step within an RGB processing pipeline.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571) 272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNY N TRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615