DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 7, 10, 17 and 20 are rejoined, as it was found that the species are obvious variants.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
It is noted that claims 1-6, 8-9, 11-16, and 18-19 are considered eligible subject matter. Even if the claims were interpreted as reciting an abstract idea, the claims provide a practical application, i.e. defect inspection for displays. It is further noted that in claim 13, “a multi-optical vision device” and “an optical coherence tomography device” are interpreted as hardware elements, as the applicant’s specification provides no other interpretation of the devices as software embodiments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 5, 9-15, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over “Vision Inspection-Synchronized Dual Optical Coherence Tomography for High Resolution Real-Time Multidimensional Defect Tracking in Optical Thin Film Industry” (Jeon et al) in view of U.S. Patent Application Publication No. 20230048386 (Wang et al).
Regarding claim 1, Jeon et al discloses a defect classification method comprising: collecting a first image (a VLS RGB image; fig. 1, VLSC) of an exterior of a display device (page 190700, paragraph 1; page 190701, paragraph 3) by a multi-optical vision device (fig. 1, VLSC); determining a defect of the display device based on the first image by the multi-optical vision device, i.e. “external bubbles, bright/dark spots…sub surface scratches” (page 190701, paragraph 3); extracting XY coordinates of the defect of the display device, i.e. potential defect locations for OCT (page 190701, paragraph 4); collecting a second image of an inside of the display device based on the XY coordinates of the defect of the display device by an optical coherence tomography device (fig. 1, OCT; page 190701, paragraph 4); using a model for determining the defect of the display device (the model employed in the recognition of defects described in pages 190702-190704, part B) and a defect type of the display device, i.e. the defect size indicated by its dimensions (page 190704, paragraph 1) or “defective” (page 190705, paragraph 1), based on the second image by the optical coherence tomography device, since the second OCT image is used to find the above parameters (page 190704, paragraph 1); determining the defect of the display device based on the second image through the model by the optical coherence tomography device, by determining defects as described in pages 190702-190704, part B, and as shown with the parameters of table 2; and determining the defect type of the display device based on the second image through the model by the optical coherence tomography device, since the OCT image from the OCT device is used to find the defect type described in page 190704, paragraph 1 and page 190705, paragraph 1.
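For illustration only, and not part of either reference's disclosure, the two-stage inspection flow mapped above (vision-device screening, coordinate extraction, then targeted OCT imaging and model-based classification) could be sketched as follows; all function, method, and class names are hypothetical assumptions:

```python
# Hypothetical sketch of the claimed two-stage inspection flow.
# The vision device screens the exterior first; the OCT device images
# the interior only at the XY coordinates of candidate defects.

def classify_defects(vision_device, oct_device, model):
    """Return (x, y, defect, defect_type) tuples for each candidate."""
    first_image = vision_device.capture()             # first image of exterior
    candidates = vision_device.find_defects(first_image)  # XY coordinates
    results = []
    for x, y in candidates:
        second_image = oct_device.scan_at(x, y)       # image of the inside
        defect, defect_type = model.predict(second_image)
        results.append((x, y, defect, defect_type))
    return results
```

This sketch only fixes the control flow described in the claim mapping; the actual devices and the deep machine learning model of the references are stood in for by the hypothetical objects passed as arguments.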
Jeon et al does not disclose expressly training a deep machine learning model to determine the defect type and the determining of defects and defect types is through the deep learning model.
Wang et al discloses training a deep machine learning model (page 1, paragraph 8, page 8, paragraph 196) to determine the defect type (page 1, paragraph 3) and the determining of defects and defect types is through the deep learning model (page 1, paragraph 14, page 7, paragraph 192).
Jeon et al and Wang et al are combinable because they are from the same field of endeavor, i.e. detection of defects on screens.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a deep machine learning model to process defect data.
The suggestion/motivation for doing so would have been to provide a more convenient, accurate and automated system.
Therefore, it would have been obvious to combine the method of Jeon et al with training a model of Wang et al to obtain the invention as specified in claim 1.
Regarding claim 4, Wang et al discloses the deep machine learning model for determining the defect of the display device and the defect type of the display device is trained based on sample data of a labeled data set (page 8, paragraph 194). Jeon et al discloses the first image and the second image by the optical coherence tomography device are samples of a dataset that are tested (pages 190704-190705, part D).
Regarding claim 5, Wang et al discloses the deep machine learning model includes a convolutional neural network (page 12, paragraph 236).
Regarding claim 9, Jeon et al discloses the second image includes information on a foreign substance and layers in a stacked structure of the display device, because the second image includes cross-sectional information of the inside of the display device (fig. 8, 9).
Regarding claim 10, Jeon et al discloses the second image includes a B-scan image, i.e. fig. 5(b), or one of the sections of fig. 5(c).
Regarding claim 11, Jeon et al discloses the second image includes a C-scan image, i.e. fig. 5(c).
Regarding claim 12, Wang et al discloses the defect of the display device and the defect type of the display device are determined based on samples of a dataset that are tested (Page 8, paragraph 194) through the deep machine learning model (page 8, paragraph 196). Jeon et al discloses the first image by the multi-optical vision device are samples of a dataset (pages 190704-190705, part D).
Claim 13 is rejected for the same reasons as claim 1. Thus, the arguments presented above for claim 1 are equally applicable to claim 13. Claim 13 distinguishes from claim 1 only in that claim 13 is a system claim instead of a method claim, and claims the multi-optical vision device and OCT device as its parts. Jeon et al further teaches this feature, i.e. fig. 1, VLSC and OCT parts.
Claims 14-15 and 19-20 are rejected for the same reasons as claims 4-5 and 9-10, respectively. Thus, the arguments presented above for claims 4-5 and 9-10 are equally applicable to claims 14-15 and 19-20. Claims 14-15 and 19-20 distinguish from claims 4-5 and 9-10 only in that they have different dependencies, which have been previously rejected. Therefore, the prior art applies.
Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al in view of Wang et al, as applied to claims 5 and 15 above, and further in view of U.S. Patent Application Publication No. 20200257955 (Naderiparizi et al).
Regarding claim 6, Jeon et al (as modified by Wang et al) discloses all of the claimed elements as set forth above and incorporated herein by reference.
Jeon et al (as modified by Wang et al) does not disclose expressly the convolutional neural network includes a convolutional layer, a pooling layer, and a fully connected layer.
Naderiparizi et al discloses a convolutional neural network that includes a convolutional layer, a pooling layer, and a fully connected layer (page 1, paragraph 6).
Jeon et al (as modified by Wang et al) and Naderiparizi et al are combinable because they are from the same field of endeavor, i.e. training convolutional neural networks.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to include the claimed layers in the convolutional neural network.
The suggestion/motivation for doing so would have been to provide a more robust system by allowing the use of commonly used layers.
Therefore, it would have been obvious to combine the method of Jeon et al (as modified by Wang et al) with the layers of Naderiparizi et al to obtain the invention as specified in claim 6.
Regarding claim 7, Naderiparizi et al discloses the pooling layer includes a max pooling layer (page 1, paragraph 6).
Regarding claim 8, Naderiparizi et al discloses the pooling layer includes an average pooling layer (page 1, paragraph 6).
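For illustration only, and not drawn from any of the cited references, the max pooling and average pooling layers addressed in claims 7 and 8 operate as sketched below; this is a minimal pure-Python rendering (real convolutional neural networks use an optimized framework), and all names are hypothetical:

```python
# Illustrative sketch: 2x2, stride-2 max pooling and average pooling
# over a small 2-D feature map, as performed by the pooling layers of a
# convolutional neural network.

def pool2x2(feature_map, reduce_fn):
    """Reduce each non-overlapping 2x2 window with reduce_fn."""
    rows, cols = len(feature_map), len(feature_map[0])
    return [
        [
            reduce_fn([
                feature_map[r][c], feature_map[r][c + 1],
                feature_map[r + 1][c], feature_map[r + 1][c + 1],
            ])
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]

def max_pool(fm):
    return pool2x2(fm, max)          # claim 7: max pooling layer

def avg_pool(fm):
    return pool2x2(fm, lambda w: sum(w) / len(w))  # claim 8: average pooling

fm = [
    [1, 2, 5, 6],
    [3, 4, 7, 8],
    [0, 2, 1, 1],
    [2, 4, 1, 3],
]
# max_pool(fm) -> [[4, 8], [4, 3]]
# avg_pool(fm) -> [[2.5, 6.5], [2.0, 1.5]]
```

Both layers downsample the feature map by the same window scheme and differ only in the reduction applied within each window.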
Claims 16-18 are rejected for the same reasons as claims 6-8, respectively. Thus, the arguments presented above for claims 6-8 are equally applicable to claims 16-18. Claims 16-18 distinguish from claims 6-8 only in that they have different dependencies, which have been previously rejected. Therefore, the prior art applies.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al in view of Wang et al, as applied to claim 1 above, and further in view of U.S. Patent Application Publication No. 20050167620 (Cho et al).
Regarding claim 2, Jeon et al (as modified by Wang et al) discloses all of the claimed elements as set forth above and incorporated herein by reference. Jeon et al further discloses that a first image is by the multi-optical vision device (page 190701, paragraph 3, fig. 1, VLSC) which indicates where the defect locations should be further investigated (page 190701, paragraph 4).
Jeon et al (as modified by Wang et al) does not disclose expressly when it is determined that the display device does not include the defect based on the defect locations needing to be further investigated, defect inspection for the display device is terminated.
Cho et al discloses when it is determined that the display device does not include the defect based on the defect locations needing to be further investigated, defect inspection for the display device is terminated (Fig. 5, step 80, no defect results in “END”).
Jeon et al (as modified by Wang et al) and Cho et al are combinable because they are from the same field of endeavor, i.e. panel inspection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to end inspection if no defect is detected.
The suggestion/motivation for doing so would have been to provide a faster method by eliminating extraneous processing.
Therefore, it would have been obvious to combine the method of Jeon et al (as modified by Wang et al) with terminating inspection of Cho et al to obtain the invention as specified in claim 2.
Regarding claim 3, Jeon et al discloses the second image is captured by the optical coherence tomography device (page 190701, paragraph 4; fig. 1, OCT) and is second imaging data that indicates defects (page 190701, paragraph 4). Cho et al discloses that when it is determined that the display device does not include the defect based on the second imaging data (fig. 5, items 84, 88), defect inspection for the display device is terminated (fig. 5, items 84 and 88, “Yes” leads to “end”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN YUAN DULANEY whose telephone number is (571) 272-2902. The examiner can normally be reached Monday 9am-5pm, Thursday 9am-1pm, and Friday 9am-3pm (week 1); and Monday 9am-5pm, Tuesday 9am-5pm, Thursday 9am-5pm, and Friday 9am-5pm (week 2).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN Y DULANEY/Primary Examiner, Art Unit 2666 1/29/2026