DETAILED ACTION
Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9 are currently pending in this application.
Priority
2. Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
3. No information disclosure statement (IDS) was submitted in this application.
Drawings
4. The drawings submitted on 12/12/2023 are in compliance with 37 CFR § 1.81 and 37 CFR § 1.83 and have been accepted by the examiner.
Claim Rejections - 35 USC § 101 Non-Statutory
5. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
6. Claims 6-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Specifically, representative Claim 6 recites:
6. A vision-based enhanced omni-directional defect detection method, comprising:
step 1, performing posture adjustment on equipment, and changing an equipment angle and a transmission speed;
step 2, searching for a source of a suspected defect and performing focus detection, comprising collecting information of a multi-angle detection picture, preliminarily identifying the multi-angle detection picture by using a YOLOv5 defect fast identification technology, and in a case where a confidence value is less than 0.6, continuously sending a signal to change the equipment angle and acquiring the multi-angle detection picture;
step 3, after the multi-angle detection picture is acquired, segmenting a defective region within an identification box in the multi-angle detection picture by using a grayscale threshold, and for the defective region, extracting feature information of a defect in the defective region based on OpenCV, wherein the feature information comprises area, perimeter, a pixel mean value, and pixel variance information;
step 4, extracting a light value of the detection picture based on OpenCV, increasing a brightness difference by actively adjusting an intensity of a light source, and enhancing a contrast between the defect and a background in the detection picture, wherein extraction of the light value comprises: converting the detection picture from a red-green-blue (RGB) color space to a hue-saturation-value (HSV) space, extracting brightness V values and calculating a mean value of the brightness V values, and wherein the detection picture has n non-zero pixels, with a non-zero pixels inside the identification box and b non-zero pixels outside the box, HSV values of the non-zero pixels are (h1, S1, V1), (h2, S2, V2), …, (hn, Sn, Vn) respectively, and a pixel mean V value is calculated as:
[Equation image media_image1.png: mean V value = (V1 + V2 + … + Vn)/n]
wherein Va and Vb are brightness mean values inside the identification box and outside the identification box respectively, and a difference is dmax = |Va - Vb|;
step 5, performing accurate identification on the defective region, wherein the defect in the defective region is extracted based on OpenCV, data preprocessing is performed firstly, and an image quality is improved through a normalization operation, wherein a normalization formula is
[Equation image media_image2.png: X0,1-n = (X - Xmin)/(Xmax - Xmin)]
wherein X denotes a pixel value of an input image, X0,1-n denotes a pixel value of an output image, Xmax denotes a maximum pixel value of the input image, Xmin denotes a minimum pixel value of the input image, and image pixels are adjusted to a range of [0, 1] after normalization;
a defect contour is detected by using Canny edges, geometric area information S of the defect is acquired by using a function of cv2.contourArea() in an OpenCV library, at the same time, geometric perimeter information L of the defect is extracted by using a function of cv2.arcLength() in the OpenCV library, a slenderness ratio M and an area occupancy degree N of the defective region are acquired,
the slenderness ratio is obtained by M = w/h,
wherein w and h are a width value and a length value of the defective rectangular region, and the area occupancy degree is obtained by
[Equation image media_image3.png: N = S/(w × h)]
and
step 6, acquiring, based on multi-feature information of the detection picture, multi-feature average data of the defect, using the multi-feature average data as input, and achieving defect category differentiation through a decision tree classification model.
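For illustration only, the arithmetic recited in steps 4 and 5 can be sketched in pure Python as follows. This is a minimal sketch, not the applicant's implementation: all function names and sample values are the editor's, the claim's OpenCV calls (cv2.cvtColor, cv2.Canny, cv2.contourArea, cv2.arcLength) are replaced by precomputed stand-in values, and the area occupancy formula N = S/(w·h) is assumed from the claim's variable definitions.

```python
# Sketch of the formulas recited in steps 4 and 5 of claim 6.
# The image operations the claim assigns to OpenCV are represented
# here by precomputed stand-in values.

def mean_v(values):
    """Mean brightness (V channel) over a list of non-zero pixel V values."""
    return sum(values) / len(values)

def brightness_difference(v_inside, v_outside):
    """Step 4: dmax = |Va - Vb|, the contrast between the identification
    box interior (a pixels) and exterior (b pixels)."""
    return abs(mean_v(v_inside) - mean_v(v_outside))

def normalize(pixels):
    """Step 5: min-max normalization of pixel values to the range [0, 1]."""
    x_min, x_max = min(pixels), max(pixels)
    return [(x - x_min) / (x_max - x_min) for x in pixels]

def slenderness_ratio(w, h):
    """Step 5: M = w/h for the defect's bounding rectangle."""
    return w / h

def area_occupancy(S, w, h):
    """Step 5: N = S/(w * h), defect area over bounding-rectangle area
    (assumed form of the formula in media_image3.png)."""
    return S / (w * h)

# Hypothetical values standing in for the OpenCV outputs:
d = brightness_difference([200, 210, 190], [90, 100, 110])  # -> 100.0
norm = normalize([50, 100, 150])                            # -> [0.0, 0.5, 1.0]
M = slenderness_ratio(40, 80)                               # -> 0.5
N = area_occupancy(1600, 40, 80)                            # -> 0.5
```

The step 6 decision tree classification would then take such feature values (dmax, M, N, area, perimeter, pixel mean, and variance) as its input vector; no particular classifier implementation is specified in the claim.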
The claim limitations in the abstract idea have been highlighted in bold above; the remaining limitations are “additional elements.”
Under Step 1 of the analysis, claim 6 belongs to a statutory category, namely, it is a process claim.
Under Step 2A, Prong 1, claim 6 is found to recite at least one judicial exception, namely a mental process and/or a mathematical concept. This can be seen in the limitations of steps 1 and 3-6, reproduced in full above: performing posture adjustment and changing an equipment angle and a transmission speed (step 1); segmenting a defective region by using a grayscale threshold and extracting feature information comprising area, perimeter, a pixel mean value, and pixel variance information (step 3); extracting a light value, calculating brightness mean values, and computing the difference dmax = |Va - Vb| (step 4); performing normalization, detecting a defect contour, and calculating the slenderness ratio M and the area occupancy degree N (step 5); and acquiring multi-feature average data and achieving defect category differentiation through a decision tree classification model (step 6). These limitations constitute the judicial exception of a mental process and/or a mathematical concept because they are merely data evaluations, including calculations and/or judgments, capable of being performed mentally.
Similar limitations comprise the abstract ideas of dependent claims 7-9.
Step 2A, prong 2 of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application.
In addition to the abstract ideas recited in claim 6, the claimed method recites additional elements, including step 2: searching for a source of a suspected defect and performing focus detection, comprising collecting information of a multi-angle detection picture, preliminarily identifying the multi-angle detection picture by using a YOLOv5 defect fast identification technology, and, in a case where a confidence value is less than 0.6, continuously sending a signal to change the equipment angle and acquiring the multi-angle detection picture. This is merely a data gathering step recited at a high level of generality and therefore amounts to mere "insignificant extra-solution activity." See MPEP 2106.05(g), "Insignificant Extra-Solution Activity."
The generic data gathering, processing, and output steps, and other elements, are recited so generically (no details whatsoever are provided) that they represent no more than mere instructions to apply the judicial exceptions on a computer. They can also be viewed as nothing more than an attempt to generally link the use of the judicial exceptions to the technological environment of a computer. Noting MPEP 2106.04(d)(I): "It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) ("The fact that a computer 'necessarily exist[s] in the physical, rather than purely conceptual, realm,' is beside the point")".
Thus, under Step 2A, Prong 2 of the analysis, even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception. No specific practical application is associated with the claimed method. For instance, nothing is done with the output of the classification model.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A, Prong 2, merely amount to a general purpose computer system that attempts to apply the abstract idea in a technological environment, limiting the abstract idea to a particular field of use, and/or merely insignificant extra-solution activity (claims 6-9). Such insignificant extra-solution activity, e.g. data gathering and output, when re-evaluated under Step 2B is further found to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, and electronically scanning or extracting data from a physical document).
With regard to the dependent claims, claims 7-9 merely further expand upon the algorithm/abstract idea and do not set forth further additional elements; therefore, these claims are found ineligible for the reasons described for independent claim 6.
See the Supreme Court decision in Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208 (2014).
Claim Rejections - 35 USC § 102
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
9. Claims 1-2 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shimbara et al. (US Patent No. 5,477,268).
With regards to claim 1, Shimbara et al. (US Patent No. 5,477,268) teaches a vision-based enhanced omni-directional defect detection apparatus, comprising:
a conveyor belt, wherein the conveyor belt is connected to a motor and a gearbox (col. 5, lines 27-30),
a lift lever is disposed on the conveyor belt opposite to one side where the motor is disposed (R1; figure 2), and
a complementary metal oxide semiconductor (CMOS) camera (functionally equivalent; col. 5, lines 42-45),
a plurality of LED lights are disposed along a longitudinal movement direction of the conveyor belt (col. 15, lines 25-27),
a pressure sensor is disposed on the conveyor belt opposite to a surface where the plurality of LED lights are disposed, and the pressure sensor is connected to a speed regulator (col. 5, lines 64-67).
With regards to claim 2, Shimbara et al. (US Patent No. 5,477,268) teaches the CMOS camera is disposed at an extended portion of the lift lever (32A; figure 5).
Claim Rejections - 35 USC § 103
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Shimbara et al. (US Patent No. 5,477,268).
With regards to claims 3-5, Shimbara et al. (US Patent No. 5,477,268) discloses the LEDs, pressure sensor, motor, and camera, except for the arrangements described in the claims. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to rearrange these parts into the claimed arrangements, since it has been held that rearranging parts of an invention involves only routine skill in the art. In re Japikse, 86 USPQ 70 (CCPA 1950).
Conclusion
13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Staab et al. (US 2016/0152416) teaches conveyor inspection with an unmanned vehicle carrying a sensor structure.
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADITYA S BHAT whose telephone number is (571)272-2270. The examiner can normally be reached on Monday-Friday 8 am-6pm.
15. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
16. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shelby Turner can be reached on 571-272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
17. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADITYA S BHAT/Primary Examiner, Art Unit 2857 March 12, 2026