DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application claims the benefit of foreign priority under 35 U.S.C. 119(a)-(d) of Application No. JP2023-045844, filed in Japan on 03/22/2023.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/15/2024 was considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al. (U.S. Patent App. Pub No. 2015/0371883 A1, hereafter referred to as Takahashi) in view of Minato et al. (U.S. Patent App. Pub No. 2017/0125271 A1, hereafter referred to as Minato).
Regarding Claim 1:
Takahashi teaches a position detection method comprising: providing a substrate processing apparatus including a chamber having a first detection target portion, a susceptor having a second detection target portion and configured to be rotatable within the chamber (Takahashi: Par. [0007-0008]; there is provided a substrate processing apparatus including: a process chamber accommodating a substrate support and a rotating mechanism configured to rotate the substrate support, the substrate support including a first substrate support unit and a second substrate support unit arranged along a circumference of the substrate support), and an imaging device (Takahashi: Par. [0263]; it may be configured to detect the substrate deviation degree on the substrate moving path using imaging cameras);
Takahashi fails to teach acquiring an image captured by the imaging device; generating an edge detection image in which an edge is detected, from the image captured by the imaging device; and detecting positions of the first detection target portion and the second detection target portion from the edge detection image using Hough transformation.
Minato, like Takahashi, is directed to a position detection method and a substrate processing apparatus. Minato does teach acquiring an image captured by the imaging device (Minato: Par. [0039]; the imaging apparatus 20 captures an image of an object for which a parameter associated with a position (e.g., the coordinate of the central position) is to be estimated with the image processing apparatus 10); generating an edge detection image in which an edge is detected, from the image captured by the imaging device (Minato: Par. [0048] and Fig. 4; for the captured image appearing on the display 31, the user performs an operation described below to designate an outline E51 (edge) of the plane shape I50 of the object; the user designates an edge extraction area (arch area with a width) SA1 for the captured image including the plane shape I50, and also sets the number of lines by which the edge is scanned in the edge extraction area SA1 (e.g., scanlines SL1 to SL11 shown in FIG. 4)); and detecting positions of the first detection target portion and the second detection target portion from the edge detection image using Hough transformation (Minato: Par. [0055] and [0104]; without alignment marks, the positioning uses the edge of an object, and thus uses coefficients calculated with fitting and geometric computations for a straight line and a circle based on a straight edge line and a circular edge line; the image processing apparatus 10 may set a value calculated with a method (e.g., a simpler position estimation method such as Hough transform or the least square method) different from the method for calculating the points H1 and H2 using the combination of points D52 and D54 used by the image processing apparatus 10 as the provisional parameter (the provisional center PP1)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Takahashi to utilize the edge detection using Hough transform, as taught by Minato, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. As taught by Minato, the proposed modification improves robustness against abnormal detection values, and achieves both high accuracy estimation and high-speed estimation processing (Minato: Abstract).
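Neither Takahashi nor Minato reproduces an implementation of the Hough transformation relied upon above. Purely as an illustrative sketch (the function name, image size, and radius below are hypothetical and not drawn from either reference), a circular Hough transform votes each edge pixel toward every candidate circle center at a known radius, and the accumulator peak gives the detected center position:

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape):
    """Vote for circle centers at a fixed radius, given (y, x) edge pixel coordinates."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        # Every point at distance `radius` from (y, x) is a candidate center.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic edge image: points on a circle of radius 20 centered at (50, 60),
# standing in for a detected circular edge (e.g., a recess rim).
true_cy, true_cx, r = 50, 60, 20
angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges = [(int(round(true_cy + r * np.sin(a))), int(round(true_cx + r * np.cos(a))))
         for a in angles]

acc = hough_circle_centers(edges, radius=r, shape=(100, 120))
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print(cy, cx)  # accumulator peak lands within a pixel of the true center (50, 60)
```

Because every edge pixel on the true circle votes for the true center, the peak is sharp even when some edge pixels are missing or spurious, which is the robustness rationale cited from Minato's Abstract.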
In regards to Claim 2, Takahashi as modified by Minato further teaches the position detection method according to claim 1, wherein the generating the edge detection image includes detecting the edge from a brightness variation (Minato: Par. [0098]; the positive or negative direction is determined based on the relationship between the brightness of the object and the brightness of the background, as well as a filter coefficient used for the edge extraction).
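Detecting an edge from a brightness variation, as mapped above, can be illustrated by a hypothetical sketch (not code from the references): a finite-difference brightness gradient is computed and thresholded, so that only pixels where brightness changes sharply are marked as edge pixels.

```python
import numpy as np

def edges_from_brightness(img, thresh):
    """Mark pixels where brightness varies sharply (finite-difference gradient magnitude)."""
    gy, gx = np.gradient(img.astype(float))  # brightness change along rows and columns
    mag = np.hypot(gx, gy)
    return mag > thresh

# A brightness step: dark left half, bright right half.
img = np.zeros((5, 8))
img[:, 4:] = 100.0
edge = edges_from_brightness(img, thresh=10.0)
print(edge[2])  # True only at the columns adjacent to the brightness jump
```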
In regards to Claim 3, Takahashi as modified by Minato further teaches the position detection method according to claim 1, wherein the second detection target portion is a recess formed on the susceptor.
In regards to Claim 4, Takahashi as modified by Minato further teaches the position detection method according to claim 3, wherein the second detection target portion is a circular recess (Takahashi: Par. [0066] and Fig. 3; the substrate support unit 217b may have, for example, a circular shape when seen from the top and a concave shape when seen from the side; Fig. 3 showcases the top down view of the circular recesses for the substrates).
In regards to Claim 5, Takahashi as modified by Minato further teaches the position detection method according to claim 3, wherein the second detection target portion is an annular recess (Takahashi: Par. [0066] and Fig. 3; the substrate support unit 217b may have, for example, a circular shape when seen from the top and a concave shape when seen from the side; Fig. 3 showcases the top down view of the circular recesses for the substrates).
In regards to Claim 6, Takahashi as modified by Minato further teaches the position detection method according to claim 1, wherein the second detection target portion is a disc-shaped member arranged in a recess formed on the susceptor (Takahashi: Par. [0066] and Fig. 3; the substrate support unit 217b may have, for example, a circular shape when seen from the top and a concave shape when seen from the side; Fig. 3 showcases the top down view of the circular recesses for the substrates).
In regards to Claim 7, Takahashi as modified by Minato further teaches the position detection method according to claim 1, wherein the first detection target portion and the second detection target portion have a circular shape in plan view (Takahashi: Par. [0066] and Fig. 3; the substrate support unit 217b may have, for example, a circular shape when seen from the top and a concave shape when seen from the side; Fig. 3 showcases the top down view of the circular recesses for the substrates), and wherein the Hough transformation detects the circular shape (Minato: Par. [0055] and [0104]; without alignment marks, the positioning uses the edge of an object, and thus uses coefficients calculated with fitting and geometric computations for a straight line and a circle based on a straight edge line and a circular edge line; the image processing apparatus 10 may set a value calculated with a method (e.g., a simpler position estimation method such as Hough transform or the least square method) different from the method for calculating the points H1 and H2 using the combination of points D52 and D54 used by the image processing apparatus 10 as the provisional parameter (the provisional center PP1)).
Regarding Claim 9:
Takahashi as modified by Minato further teaches a substrate processing apparatus comprising: a chamber having a first detection target portion; a susceptor having a second detection target portion, the susceptor being rotatable within the chamber (Takahashi: Par. [0007-0008]; there is provided a substrate processing apparatus including: a process chamber accommodating a substrate support and a rotating mechanism configured to rotate the substrate support, the substrate support including a first substrate support unit and a second substrate support unit arranged along a circumference of the substrate support); an imaging device (Takahashi: Par. [0263]; it may be configured to detect the substrate deviation degree on the substrate moving path using imaging cameras); and a controller, wherein the controller is configured to execute a process including: acquiring an image captured by the imaging device (Minato: Par. [0039]; the imaging apparatus 20 captures an image of an object for which a parameter associated with a position (e.g., the coordinate of the central position) is to be estimated with the image processing apparatus 10); generating an edge detection image in which an edge is detected, from the image captured by the imaging device (Minato: Par. [0048] and Fig. 4; for the captured image appearing on the display 31, the user performs an operation described below to designate an outline E51 (edge) of the plane shape I50 of the object; the user designates an edge extraction area (arch area with a width) SA1 for the captured image including the plane shape I50, and also sets the number of lines by which the edge is scanned in the edge extraction area SA1 (e.g., scanlines SL1 to SL11 shown in FIG. 4)); and detecting positions of the first detection target portion and the second detection target portion from the edge detection image using Hough transformation (Minato: Par. [0055] and [0104]; without alignment marks, the positioning uses the edge of an object, and thus uses coefficients calculated with fitting and geometric computations for a straight line and a circle based on a straight edge line and a circular edge line; the image processing apparatus 10 may set a value calculated with a method (e.g., a simpler position estimation method such as Hough transform or the least square method) different from the method for calculating the points H1 and H2 using the combination of points D52 and D54 used by the image processing apparatus 10 as the provisional parameter (the provisional center PP1)).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al. (U.S. Patent App. Pub No. 2015/0371883 A1, hereafter referred to as Takahashi) in view of Minato et al. (U.S. Patent App. Pub No. 2017/0125271 A1, hereafter referred to as Minato) and Severns et al. (U.S. Patent No. 5,121,531, hereafter referred to as Severns).
In regards to Claim 8, Takahashi as modified by Minato teaches the position detection method according to claim 1, but fails to further teach wherein the first detection target portion and the second detection target portion have a polygonal shape with a plurality of straight lines in plan view, and wherein the Hough transformation detects the straight lines.
Severns, like Takahashi, is directed to a substrate processing apparatus. Severns does further teach wherein the first detection target portion and the second detection target portion have a polygonal shape with a plurality of straight lines in plan view (Severns: Col. 2, lines 60-70; susceptor 1 has a pentagonal cross-sectional shape in a direction transverse to its vertical axis; since each of its five faces has three shallow recesses 7, the total capacity of susceptor 1 as illustrated is fifteen substrates; the susceptor 1 could have any other polygonal cross-sectional shape, such as tetragonal, hexagonal, or septagonal, and that the number of substrates per side might be more or less than three), and wherein the Hough transformation detects the straight lines (Minato: Par. [0055] and [0104]; without alignment marks, the positioning uses the edge of an object, and thus uses coefficients calculated with fitting and geometric computations for a straight line and a circle based on a straight edge line and a circular edge line; the image processing apparatus 10 may set a value calculated with a method (e.g., a simpler position estimation method such as Hough transform or the least square method) different from the method for calculating the points H1 and H2 using the combination of points D52 and D54 used by the image processing apparatus 10 as the provisional parameter (the provisional center PP1)).
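The straight-line case of the Hough transformation cited from Minato can likewise be sketched (illustrative only; the (θ, ρ) parameterization and the values below are not taken from the references): each edge point votes for every line passing through it in normal form x·cos θ + y·sin θ = ρ, and collinear points, such as those along one side of a polygonal susceptor, concentrate their votes in a single (θ, ρ) bin.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=200):
    """Vote in (theta, rho) space; collinear (y, x) points produce a sharp peak."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * rho_max), dtype=int)
    for (y, x) in points:
        # For each theta, the point lies on exactly one line with rho = x cos(t) + y sin(t).
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + rho_max
        np.add.at(acc, (np.arange(n_theta), rhos), 1)
    return acc, thetas

# Edge points of the horizontal line y = 40 (one straight side in plan view).
pts = [(40, x) for x in range(10, 60)]
acc, thetas = hough_lines(pts)
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
theta, rho = float(thetas[ti]), int(ri) - 200
print(round(theta, 3), rho)  # 1.571 40, i.e. theta = pi/2, rho = 40 for the line y = 40
```

All fifty points vote into the same bin at (π/2, 40), so the peak height equals the number of collinear edge points, which is how the transform distinguishes the polygon's straight sides from scattered noise.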
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Takahashi to utilize the pentagonal shaped susceptor, as taught by Severns, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. As taught by Severns, the proposed modification allows the susceptor to accommodate different numbers of substrates while keeping their positions equally spaced circumferentially (Severns: Col. 2, lines 60-70).
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wilt (U.S. Patent App. Pub No. 2002/0114518 A1) teaches the Hough transform.
Suehira et al. (U.S. Patent App. Pub No. 2009/0108483 A1) teaches an alignment method for effecting alignment between two plate-like objects.
Dong et al. (U.S. Patent App. Pub No. 2012/0045115 A1) teaches a mechanism in which multiple sensitivity regions are set in a single inspection region, thereby detecting a defect only in a region where a DOI (Defect of Interest) is present.
Kuwahara (U.S. Patent App. Pub No. 2018/0218935 A1) teaches a detection coordinate calculator that calculates detection coordinates of an outer periphery of a reference substrate or a substrate placed at a reference position on a hand.
Isaka (U.S. Patent App. Pub No. 2023/0176489 A1) teaches a detecting apparatus configured to detect a position of a predetermined pattern on a substrate.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RENAE BITOR whose telephone number is (703)756-5563. The examiner can normally be reached Monday through Friday, 8:00 a.m. to 5:30 p.m., except the first Friday of the biweek.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GREG MORSE can be reached on (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RENAE A BITOR/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698