DETAILED ACTION
Notice of AIA Status
The present application is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Response to Amendments
Applicant’s arguments (see Remarks, filed 01/27/2026) with respect to the claim objections and the 35 U.S.C. 112(b) rejections set forth in the non-final Office action dated 10/29/2025 are persuasive in view of the amendments. Accordingly, the claim objections and the 112(b) rejections have been withdrawn.
Response to Arguments
The applicant argues on page 18, “However, when Nishino is read as a whole, it is clear that the optical parts 3A and 3B are never actually supported by the head 25 at the same time, but that the head 25 is used to suction-attach and hold the optical parts 3A and 3B at different times (see e.g., step S25 and step S31 of the method depicted in FIG. 13 of Nishino, which are clearly performed sequentially). Therefore, the optical parts 3A and 3B of Nishino cannot be considered to be fixed to the head 25 "in an unknown spatial relationship" as required by independent claim 1.”
In response, the Office does not find this argument persuasive. Based on the breadth of the claim language, Nishino et al. (US 20100132187 A1) explicitly teaches the method comprising: fixing the first and second objects to the same motion control stage in an unknown spatial relationship (Fig. 9, Paragraph [0051]- Nishino discloses the part mounting apparatus illustrated in FIG. 9 includes an XY-stage 22, which is movable in an X-axis direction and a Y-axis direction. Provided on the XY-stage 22 is a placement stage 22a on which the mounting board 1 is placed. The optical parts 3A and 3B, which are mounting parts, are supported by a head 25 attached to a Z-stage 24, which is movable in a Z-axis direction.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Nishino’s teaching of fixing the first and second objects to the same motion control stage in an unknown spatial relationship.
The combination results in Harada’s system for aligning and bonding multiple objects in which the first and second objects are fixed to the same motion control stage in an unknown spatial relationship.
The motivation for the modification would have been to allow for greater accuracy in obtaining positional data, since Harada and Nishino are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Nishino’s system provides a way to improve positional accuracy. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Nishino et al. (US 20100132187 A1), Paragraph [0079].
For clarity of the record, the Office understands that the two objects (3A and 3B) on the motion control stage 22 do not have a known spatial relationship or distance between them.
The Office respectfully recommends that the applicant further amend the claims to further define the unknown spatial relationship, for example as described in the specification at page 35, lines 30-32.
The applicant argues on page 18, “Moreover, the Office Action asserts, on page 11, that one of ordinary skill in the art would have been motivated to combine the teachings of Harada in view of Park with those of Nishino because it would have allowed "for greater accuracy in obtaining positional data, since both Harada and Nishino are both systems that align multiple objects." The Office Action alleges that paragraph [0096] of Harada teaches that the system of Harada provides "improved systems efficiency" and that paragraph [0079] of Nishino teaches that the system of Nishino provides "a way to improve positional accuracy."”
In response, the Office does not find this argument persuasive, based on the same reasons set forth above and the rejection below.
The applicant argues on page 19, “Therefore, even if the optical parts 3A and 3B of Nishino were considered to be fixed "to the same motion control stage in an unknown spatial relationship" as required by independent claim 1, paragraph [0079] of Nishino does not teach a way to improve positional accuracy of the optical parts 3A and 3B. Rather, paragraph [0079] of Nishino teaches arranging optical part 20 with respect to optical parts 3A and 3B with good positional accuracy.”
In response, the Office does not find this argument persuasive, based on the same reasons set forth above and the rejection below.
The applicant argues on page 19, “In addition, even if Nishino was considered to teach that first and second objects in the form of the optical parts 3A and 3B are fixed "to the same motion control stage in an unknown spatial relationship" as required by independent claim 1, Applicant submits that any such teaching of Nishino is fundamentally incompatible with the teachings of FIG. 19 of Harada. More specifically, paragraphs [0131] and [0134] of Harada teach that the photomask 130 and the semiconductor wafer 131 (which the Office Action alleges are equivalent to the claimed first and second objects, respectively) should be mounted on different alignment stages to enable the alignment of the photomask 130 and the semiconductor wafer 131 using the different alignment stages. Consequently, one of ordinary skill in the art would NOT be motivated to fix the photomask 130 and the semiconductor wafer 131 of Harada "to the same motion control stage in an unknown spatial relationship" as required by independent claim 1, because that would have prevented alignment of the photomask 130 and the semiconductor wafer 131 relative to one another.”
In response, the Office does not find this argument persuasive, based on the same reasons set forth above and the rejection below.
The applicant argues on page 19, “Therefore, when starting from Harada, even if Nishino was considered to teach that the optical parts 3A and 3B are fixed "to the same motion control stage in an unknown spatial relationship", one of ordinary skill in the art would not arrive at the method of independent claim 1.”
In response, the Office does not find this argument persuasive, based on the same reasons set forth above and the rejection below.
The applicant argues on page 19, “Furthermore, Borodovsky, Liu, Yamada, Lin, and Nishimura fail to cure the deficiencies of Harada, Park, and Nishino. Thus, dependent claims 2-21, 27-29, and 32-28 are also non-obvious over the cited”
In response, the Office does not find this argument persuasive, based on the same reasons set forth above and the rejection below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claims 1, 6, and 12 recite limitations that use a generic placeholder (a term substituting for “means”) coupled with functional language, and therefore do invoke 35 U.S.C. 112(f):
Claim 1 recites the limitation “imaging system to acquire…” [Line 5].
Claim 1 recites the limitation “imaging system to acquire…” [Line 12].
Claim 6 recites the limitation “imaging system to acquire…” [Line 15].
Claim 12 recites the limitation “imaging system to acquire…” [Line 16].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
After a careful analysis, as set forth above, and a careful review of the specification, the Office identifies the following corresponding structure for the limitations of claims 1, 6, and 12:
“imaging system” (Fig. 1, #30; page 28, lines 30-34, and page 29, lines 1-2- “The imaging system 30 includes a microscope and a camera arranged so that the camera can acquire images of one or more objects located on the upper surface 23 of the table 22 of the motion control stage 20 through the microscope.” The specification thus discloses sufficient corresponding structure, namely a camera that acquires images through a microscope).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 9, 18-21, 27-29, 35, and 37-39 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1), hereafter referenced as Harada, in view of Park et al. (US 20080090312 A1), hereafter referenced as Park, and Nishino et al. (US 20100132187 A1), hereafter referenced as Nishino.
Regarding claim 1, Harada discloses a method for use in the spatial registration of first and second objects (Fig. 3, Paragraph [0056]- Harada discloses FIG. 3 is a flowchart illustrating a first example of an alignment method. In step S10, the image capturing unit 11 captures the image of the alignment mark M arranged on the planar member 2 in FIG. 1, and the orientation detection unit 14 detects the orientation of the planar member 2 by detecting the orientation of the image of the alignment mark M1.),
using an imaging system (Fig. 19, #97 and #98 called cameras. Paragraph [0132]) to acquire an image of the first object or to acquire an image of a first marker provided with the first object (Fig. 12B, Paragraph [0132]- Harada discloses the photomask 130 and the semiconductor wafer 131 are provided with alignment marks such as described with reference to FIG. 2, FIG. 4A, FIG. 10A, FIG. 10B, FIG. 12A, FIG. 12B, etc., and the camera 97 and/or the camera 98 capture images of the alignment marks formed on the photomask 130 and the semiconductor wafer 131. (wherein the photomask 130 is the first object)),
wherein the first marker and the first object have a known spatial relationship (Fig. 5 Paragraph [0067]- Harada discloses the position detection unit 12 determines the absolute position of the alignment mark M2 from a known absolute position in the field of view of the image capturing unit 11.);
determining a position and orientation of the first object in a frame of reference of the motion control stage based at least in part on the acquired image of the first object (Fig. 1, Paragraph [0134]- Harada discloses the alignment unit 96 takes as inputs the images that the camera 97 and/or the camera 98 captured of the alignment marks formed on the photomask 130 and the semiconductor wafer 131, aligns the photomask 130 and the semiconductor wafer 131 relative to each other by moving the XYZ stage 91 and/or the mask stage 92, and detects the orientations of the photomask 130 and the semiconductor wafer 131.),
or based at least in part on the acquired image of the first marker and the known spatial relationship between the first marker and the first object (Fig. 1, Paragraph [0049]- Harada discloses a position adjusting unit 13 which adjusts the position of the planar member 2 relative to the reference position, based on the position of the alignment mark M detected by the position detection unit 12, and an orientation detection unit 14 which detects the orientation of the planar member 2 based the image captured of the alignment mark.);
using the imaging system to acquire an image of the second object (Fig. 2, Paragraph [0091]- Harada discloses when the porous chuck 52 is placed over the porous chuck 51, the camera 61 and/or the camera 62 capture images of the alignment marks formed on the second transparent sheet 101 held on the porous chuck 52.)
or to acquire an image of a second marker provided with the second object (fig. 12B Paragraph [0132]- Harada discloses the photomask 130 and the semiconductor wafer 131 are provided with alignment marks such as described with reference to FIG. 2, FIG. 4A, FIG. 10A, FIG. 10B, FIG. 12A, FIG. 12B, etc., and the camera 97 and/or the camera 98 capture images of the alignment marks formed on the photomask 130 and the semiconductor wafer 131. (wherein the semiconductor wafer 131 is the second object)),
wherein the second marker and the second object have a known spatial relationship (Fig. 5 Paragraph [0067]- Harada discloses the position detection unit 12 determines the absolute position of the alignment mark M2 from a known absolute position in the field of view of the image capturing unit 11.);
and determining a position and orientation of the second object in the frame of reference of the motion control stage based at least in part on the acquired image of the second object (Fig. 1, Paragraph [0134]- Harada discloses the alignment unit 96 takes as inputs the images that the camera 97 and/or the camera 98 captured of the alignment marks formed on the photomask 130 and the semiconductor wafer 131, aligns the photomask 130 and the semiconductor wafer 131 relative to each other by moving the XYZ stage 91 and/or the mask stage 92, and detects the orientations of the photomask 130 and the semiconductor wafer 131.)
or based at least in part on the acquired image of the second marker and the known spatial relationship between the second marker and the second object (Fig. 1, Paragraph [0049]- Harada discloses a position adjusting unit 13 which adjusts the position of the planar member 2 relative to the reference position, based on the position of the alignment mark M detected by the position detection unit 12, and an orientation detection unit 14 which detects the orientation of the planar member 2 based the image captured of the alignment mark.).
Although Harada explicitly teaches an imaging system, Harada fails to explicitly teach an imaging system that captures images through a microscope.
However, Park explicitly teaches an imaging system that captures images through a microscope (Fig. 2, Paragraph [0039]- Park discloses a microscope mounted camera may be employed to acquire 210 the image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Harada’s method for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Park’s teaching of an imaging system that captures images through a microscope.
The combination results in Harada’s system for aligning and bonding multiple objects in which the imaging system captures images through a microscope.
The motivation for the modification would have been to allow for easier processing of small objects, since Harada and Park are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Park’s system provides another way to improve efficiency when imaging a small-scale object. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Park et al. (US 20080090312 A1), Paragraph [0023].
Harada in view of Park fails to explicitly teach the method comprising: fixing the first and second objects to the same motion control stage in an unknown spatial relationship.
However, Nishino explicitly teaches the method comprising: fixing the first and second objects to the same motion control stage in an unknown spatial relationship (Fig. 9, Paragraph [0051]- Nishino discloses the part mounting apparatus illustrated in FIG. 9 includes an XY-stage 22, which is movable in an X-axis direction and a Y-axis direction. Provided on the XY-stage 22 is a placement stage 22a on which the mounting board 1 is placed. The optical parts 3A and 3B, which are mounting parts, are supported by a head 25 attached to a Z-stage 24, which is movable in a Z-axis direction.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Nishino’s teaching of fixing the first and second objects to the same motion control stage in an unknown spatial relationship.
The combination results in Harada’s system for aligning and bonding multiple objects in which the first and second objects are fixed to the same motion control stage in an unknown spatial relationship.
The motivation for the modification would have been to allow for greater accuracy in obtaining positional data, since Harada and Nishino are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Nishino’s system provides a way to improve positional accuracy. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Nishino et al. (US 20100132187 A1), Paragraph [0079].
Regarding claim 3, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches the method comprising: determining the position and orientation of the first marker in the frame of reference of the motion control stage based at least in part on the acquired image of the first marker (Fig. 1, Paragraph [0055]- Harada discloses the orientation detection unit 14 performs pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1 by rotating the prestored image in increments of a predetermined angle, and detects the orientation of the alignment mark M1 in the image captured by the image capturing unit 11.);
and using the determined position and orientation of the first marker in the frame of reference of the motion control stage and the known spatial relationship between the first marker and the first object to determine the position and orientation of the first object in the frame of reference of the motion control stage (Fig. 1, Paragraph [0055]- Harada discloses information indicating the orientation detected by the orientation detection unit 14 is supplied to a determining unit 15. The determining unit 15 determines whether the planar member 2 is oriented in the correct direction by referring to the orientation detected by the orientation detection unit 14.).
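For clarity of the record, the determination recited in claim 3 amounts to composing the measured pose of the first marker in the stage frame with the known marker-to-object spatial relationship. The following Python sketch is offered for illustration only; it is not taken from the applied references, it assumes planar (2D) rigid poses, and all names and numeric values are hypothetical:

    import numpy as np

    def pose_matrix(x, y, theta):
        # Homogeneous 2D rigid transform for a pose (x, y, theta) in a given frame.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0.0, 0.0, 1.0]])

    # Pose of the first marker in the frame of reference of the motion control
    # stage, as determined from the acquired image of the first marker.
    T_stage_marker = pose_matrix(12.40, 3.75, np.deg2rad(1.2))

    # Known spatial relationship between the first marker and the first object.
    T_marker_object = pose_matrix(0.50, -0.25, 0.0)

    # Pose of the first object in the stage frame: compose the two transforms.
    T_stage_object = T_stage_marker @ T_marker_object
    x, y = T_stage_object[0, 2], T_stage_object[1, 2]
    theta = np.arctan2(T_stage_object[1, 0], T_stage_object[0, 0])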
Regarding claim 9, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches the method comprising: determining the position and orientation of the second marker in the frame of reference of the motion control stage based at least in part on the acquired image of the second marker (Fig. 15, Paragraph [0112]- Harada discloses the alignment unit 75 takes as inputs the images that the camera 76 and/or the camera 77 captured of the alignment marks formed on the substrate 110, aligns the substrate 110 with respect to the laser light source 72 by moving the XY stage 71, and detects the orientation of the substrate 110. The position of the substrate 110 relative to the laser light source 72 may be adjusted by moving the laser light source 72 instead of or in addition to moving the substrate 110 by the XY stage 71.);
and using the determined position and orientation of the second marker in the frame of reference of the motion control stage and the known spatial relationship between the second marker and the second object to determine the position and orientation of the second object in the frame of reference of the motion control stage (Fig. 15, Paragraph [0113]- Harada discloses Based on the orientation of the substrate 110 detected by the alignment unit 75, the determining unit 78 determines whether the substrate 110 mounted on the XY stage 71 is oriented in the correct direction; if the substrate 110 is not oriented correctly, the determining unit 78 sends an alarm signal to the output unit 74.).
Regarding claim 18, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein the first object is detachably attached to the motion control stage (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space;)
or wherein the first object is detachably attached to a first substrate or wafer and the first substrate or wafer is fixed to the motion control stage (Fig. 2, Paragraph [0110]- Harada discloses the substrate 110 is provided with alignment marks such as described with reference to FIG. 2, FIG. 4A, FIG. 10A, FIG. 10B, FIG. 12A, FIG. 12B, etc., and the camera 76 and/or the camera 77 capture images of the alignment marks formed on the substrate 110 mounted on the XY stage 71. Further in Fig. 17, Paragraph [0119]- Harada discloses the fabrication apparatus 80 is a fabrication apparatus for fabricating a circuit substrate by mounting an electronic component 121 on the substrate 120.).
Regarding claim 19, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein the second object is detachably attached to the motion control stage (Fig. 15, Paragraph [0109]- Harada discloses the fabrication apparatus 70 includes: an XY stage 71 for mounting thereon a substrate 110 and for moving the substrate 110 in two-dimensional space)
or wherein the second object comprises a feature, a structure, a target area, a target region defined on a second substrate or wafer (Fig. 2, Paragraph [0121]- Harada discloses the substrate 120 is provided with alignment marks such as described with reference to FIG. 2, FIG. 4A, FIG. 10A, FIG. 10B, FIG. 12A, FIG. 12B, etc., and the camera 86 and/or the camera 87 capture images of the alignment marks formed on the substrate 120 mounted on the XY stage 81.),
and the second substrate or wafer is fixed to the motion control stage (Fig. 15, Paragraph [0109]- Harada discloses the fabrication apparatus 70 includes: an XY stage 71 for mounting thereon a substrate 110 and for moving the substrate 110 in two-dimensional space).
Regarding claim 20, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches the method comprising determining a spatial relationship between the first and second objects in the frame of reference of the motion control stage based on the determined position and orientation of the first object in the frame of reference of the motion control stage and the determined position and orientation of the second object in the frame of reference of the motion control stage (Fig. 19 Paragraph [0135]- Harada discloses based on the orientations of the photomask 130 and the semiconductor wafer 131 detected by the alignment unit 96, the determining unit 99 determines whether the photomask 130 and the semiconductor wafer 131 mounted on the mask stage 92 and the XYZ stage 91, respectively, are oriented in the correct direction; if the photomask 130 and the semiconductor wafer 131 are not oriented correctly, the determining unit 99 sends an alarm signal to the output unit 95.).
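For clarity of the record, the spatial relationship recited in claim 20 follows from the two stage-frame poses by composing one pose with the inverse of the other. The following self-contained Python sketch is offered for illustration only, under the same 2D-pose assumption and with hypothetical names and values:

    import numpy as np

    def pose_matrix(x, y, theta):
        # Homogeneous 2D rigid transform for a pose (x, y, theta).
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

    # Poses of the first and second objects in the frame of reference of the
    # motion control stage, as determined from the acquired images.
    T_stage_first = pose_matrix(12.40, 3.75, np.deg2rad(1.2))
    T_stage_second = pose_matrix(30.10, 4.05, np.deg2rad(-0.4))

    # Pose of the second object relative to the first, expressed in the frame
    # of reference of the motion control stage.
    T_first_second = np.linalg.inv(T_stage_first) @ T_stage_second
    dx, dy = T_first_second[0, 2], T_first_second[1, 2]
    dtheta = np.arctan2(T_first_second[1, 0], T_first_second[0, 0])

A stage motion that drives dx, dy, and dtheta to zero would bring the two objects into alignment, consistent with the registration recited in claims 21 and 35.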
Regarding claim 21, Harada in view of Park and Nishino teaches the method of claim 20,
Harada further teaches the method comprising spatially registering the first and second objects based on the determined spatial relationship between the first and second objects in the frame of reference of the motion control stage (Fig. 19 Paragraph [0135]- Harada discloses based on the orientations of the photomask 130 and the semiconductor wafer 131 detected by the alignment unit 96, the determining unit 99 determines whether the photomask 130 and the semiconductor wafer 131 mounted on the mask stage 92 and the XYZ stage 91, respectively, are oriented in the correct direction; if the photomask 130 and the semiconductor wafer 131 are not oriented correctly, the determining unit 99 sends an alarm signal to the output unit 95.).
Regarding claim 27, Harada in view of Park and Nishino discloses the method of claim 21,
Harada further teaches the method comprising attaching the first and second objects while the first and second objects are aligned (Fig. 14, Paragraph [0106]- Harada discloses the UV radiation device 57 projects ultraviolet radiation onto the first and second transparent sheets 100 and 101, thereby curing the coating adhesive on the respective sheets 100 and 101 and preventing them from slipping out of position.).
Regarding claim 28, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein at least one of the first and second objects comprises a component or wherein at least one of the first and second objects comprises a portion, piece or chip of material (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space).
Regarding claim 29, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein one of the first and second objects comprises a lithographic mask (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space (wherein photomask is another name for a lithographic mask))
and the other of the first and second objects comprises a work-piece (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space).
Regarding claim 35, Harada in view of Park and Nishino teaches the method of claim 21. Harada further teaches wherein spatially registering the first and second objects based on the determined spatial relationship between the first and second objects in the frame of reference of the motion control stage comprises holding the first object (Fig. 13, Paragraph [0089]- Harada discloses the fabrication apparatus 50 includes: porous chucks 51 and 52 for holding thereon first and second transparent sheets 100 and 101 as workpieces),
moving the first object and the motion control stage apart, using the motion control stage to move the second object relative to the first object based on the determined spatial relationship between the first and second objects in the frame of reference of the motion control stage until the first and second objects are in alignment (Fig. 14, Paragraph [0104]- Harada discloses from the amounts of positional displacement of the first and second transparent sheets 100 and 101, the alignment unit 60 determines the amount of positional displacement between the first and second transparent sheets 100 and 101. Then, the alignment unit 60 aligns the first and second transparent sheets 100 and 101 to each other by moving the XY stage 54 so as to reduce the thus determined amount of positional displacement to zero.),
and then bringing the first and second objects together until the first and second objects are aligned and in engagement (Fig. 14, Paragraph [0105]- Harada discloses in step S38, the Z stage 55 is moved upward, and the first and second transparent sheets 100 and 101 are laminated together under pressure.).
Regarding claim 37, Harada in view of Park and Nishino teaches the method of claim 35. Harada further teaches wherein the intermediate adhesive material or agent comprises an intermediate adhesion layer (Fig. 14, Paragraph [0104]- Harada discloses from the amounts of positional displacement of the first and second transparent sheets 100 and 101, the alignment unit 60 determines the amount of positional displacement between the first and second transparent sheets 100 and 101. Then, the alignment unit 60 aligns the first and second transparent sheets 100 and 101 to each other by moving the XY stage 54 so as to reduce the thus determined amount of positional displacement to zero. Fig. 14, Paragraph [0106]- Harada discloses the UV radiation device 57 projects ultraviolet radiation onto the first and second transparent sheets 100 and 101, thereby curing the coating adhesive on the respective sheets 100 and 101 and preventing them from slipping out of position.).
Regarding claim 38, Harada in view of Park and Nishino teaches the method of claim 28. Harada further teaches wherein the component comprises an (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space)
or an electronic component (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space).
Regarding claim 39, Harada in view of Park and Nishino teaches the method of claim 29. Harada further teaches wherein the work-piece comprises a substrate or a wafer (Fig. 19, Paragraph [0131]- Harada discloses the fabrication apparatus 90 includes: an XYZ stage 91 for mounting thereon the semiconductor wafer 131 and for moving the semiconductor wafer 131 in three-dimensional space; a mask stage 92 for mounting thereon the photomask 130 and for moving the photomask 130 in two-dimensional space).
Claims 2, 8, and 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1), hereafter referenced as Harada, in view of Park et al. (US 20080090312 A1), hereafter referenced as Park, Nishino et al. (US 20100132187 A1), hereafter referenced as Nishino, and Borodovsky et al. (US 20180033593 A1), hereafter referenced as Borodovsky.
Regarding claim 2, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein the first marker is rotationally asymmetric in one or two dimensions (Fig. 4A, Paragraph [0060]- Harada discloses the geometric figures F2 to F4 are identical in shape, each rotated 90 degrees relative to one another, but the geometric figure F1 differs in shape from the geometric figures F2 to F4; as a result, the alignment mark M2 as a whole has a rotationally asymmetrical shape.).
Harada in view of Park and Nishino fails to explicitly teach wherein the first marker is aperiodic.
However, Borodovsky explicitly teaches wherein the first marker is aperiodic (Fig. 11, Paragraph [0062]- Borodovsky discloses an aperiodic alignment mark structure (in the X-direction) may be used. As an example, FIG. 11 illustrates an aperiodic alignment structure and corresponding backscatter electron (BSE) detector response, in accordance with an embodiment of the present invention.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park and Nishino for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Borodovsky’s teaching of a first marker that is aperiodic.
The combination results in Harada’s system for aligning and bonding multiple objects in which the first marker is aperiodic.
The motivation for the modification would have been to allow for greater system accuracy, since Harada and Borodovsky are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Borodovsky’s system improves accuracy. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Borodovsky et al. (US 20180033593 A1), Paragraph [0050].
Regarding claim 8, Harada in view of Park and Nishino discloses the method of claim 1,
Harada further teaches wherein the second marker is rotationally asymmetric and/or aperiodic in one or two dimensions (Fig. 2, Paragraph [0052]- Harada discloses further, since the alignment mark M1 has a rotationally asymmetrical shape as illustrated, the orientation of the planar member 2 can be detected by detecting the direction in which the portion indicated at reference character A is oriented in the image of the alignment mark M1 captured by the image capturing unit 11.).
Harada in view of Park and Nishino fails to explicitly teach wherein the second marker is aperiodic.
However, Borodovsky explicitly teaches wherein the second marker is aperiodic (Fig. 11, Paragraph [0062]- Borodovsky discloses an aperiodic alignment mark structure (in the X-direction) may be used. As an example, FIG. 11 illustrates an aperiodic alignment structure and corresponding backscatter electron (BSE) detector response, in accordance with an embodiment of the present invention.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park and Nishino for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Borodovsky’s teaching of a second marker that is aperiodic.
The combination results in Harada’s system for aligning and bonding multiple objects in which the second marker is aperiodic.
The motivation for the modification would have been to allow for greater system accuracy, since Harada and Borodovsky are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Borodovsky’s system improves accuracy. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Borodovsky et al. (US 20180033593 A1), Paragraph [0050].
Regarding claim 33, Harada in view of Park, Nishino, and Borodovsky teaches the method of claim 2. Harada further teaches wherein the first marker comprises, or takes the form of, a grid which is rotationally asymmetric in one or two dimensions (Fig. 4A, Paragraph [0060]- Harada discloses the geometric figures F2 to F4 are identical in shape, each rotated 90 degrees relative to one another, but the geometric figure F1 differs in shape from the geometric figures F2 to F4; as a result, the alignment mark M2 as a whole has a rotationally asymmetrical shape.).
Harada in view of Park and Nishino fails to explicitly teach wherein the first marker is aperiodic.
However, Borodovsky explicitly teaches wherein the first marker is aperiodic (Fig. 11, Paragraph [0062]- Borodovsky discloses an aperiodic alignment mark structure (in the X-direction) may be used. As an example, FIG. 11 illustrates an aperiodic alignment structure and corresponding backscatter electron (BSE) detector response, in accordance with an embodiment of the present invention.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park and Nishino for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Borodovsky’s teaching of a first marker that is aperiodic.
The combination results in Harada’s system for aligning and bonding multiple objects in which the first marker is aperiodic.
The motivation for the modification would have been to allow for greater system accuracy, since Harada and Borodovsky are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Borodovsky’s system improves accuracy. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Borodovsky et al. (US 20180033593 A1), Paragraph [0050].
Regarding claim 34, Harada in view of Park, Nishino, and Borodovsky teaches the method of claim 8. Harada further teaches wherein the second marker comprises, or takes the form of, a grid which is rotationally asymmetric in one or two dimensions (Fig. 4A, Paragraph [0060]- Harada discloses the geometric figures F2 to F4 are identical in shape, each rotated 90 degrees relative to one another, but the geometric figure F1 differs in shape from the geometric figures F2 to F4; as a result, the alignment mark M2 as a whole has a rotationally asymmetrical shape.).
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1), hereafter referenced as Harada, in view of Park et al. (US 20080090312 A1), hereafter referenced as Park, Nishino et al. (US 20100132187 A1), hereafter referenced as Nishino, and Liu et al. (US 20110038527 A1), hereafter referenced as Liu.
Regarding claim 4, Harada in view of Park and Nishino teaches the method of claim 3,
Harada further teaches the method comprising: measuring a relative position and orientation of the motion control stage corresponding to the acquired image of the first marker (Fig. 2, Paragraph [0061]- Harada discloses the position detection unit 12 detects the position of the alignment mark M2 by detecting, through image processing, the position of the portion of the alignment mark M2 indicated at reference character P. Further, the orientation of the planar member 2 can be detected by detecting the direction in which the portion indicated at reference character A is oriented in the image of the alignment mark M2 captured by the image capturing unit 11.);
determining a degree of similarity between the acquired image of the first marker and a virtual image of the first marker (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1.),
which virtual image of the first marker has the same size and shape as the first marker (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1.),
and determining the position and orientation of the first marker in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the first marker and the relative position and orientation of the virtual image of the first marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the first marker (Fig. 2, Paragraph [0055]- Harada discloses information indicating the orientation detected by the orientation detection unit 14 is supplied to a determining unit 15. The determining unit 15 determines whether the planar member 2 is oriented in the correct direction by referring to the orientation detected by the orientation detection unit 14.).
Harada in view of Park and Nishino fails to explicitly teach, responsive to determining that the degree of similarity between the acquired image of the first marker and the virtual image of the first marker does not comply with a predetermined criterion, translating and/or rotating the virtual image of the first marker with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion and the acquired image of the first marker complies with the predetermined criterion.
However, Liu explicitly teaches and responsive to determining that the degree of similarity between the acquired image of the first marker and the virtual image of the first marker does not comply with a predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.),
translating and/or rotating the virtual image of the first marker with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
and the acquired image of the first marker complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Harada in view of Park and Nishino for use in the spatial registration of first and second objects, which uses an imaging system to acquire an image of the first object or an image of a first marker provided with the first object, with Liu’s teaching of, responsive to determining that the degree of similarity between the acquired image of the first marker and the virtual image of the first marker does not comply with a predetermined criterion, translating and/or rotating the virtual image of the first marker with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion and the acquired image of the first marker complies with the predetermined criterion.
The combination results in Harada’s system for aligning and bonding multiple objects in which, responsive to determining that the degree of similarity between the acquired image of the first marker and the virtual image of the first marker does not comply with a predetermined criterion, the virtual image of the first marker is translated and/or rotated with respect to a FOV of the imaging system until the degree of similarity complies with the predetermined criterion.
The motivation for the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada’s system provides improved system efficiency, while Liu’s system improves the accuracy of the system. See Harada et al. (US 20110001974 A1), Paragraph [0096], and Liu et al. (US 20110038527 A1), Paragraph [0069].
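For clarity of the record, the rotation sweep Liu describes in Paragraph [0049] (matching the model pattern at several candidate angles and taking the peak of a curve fitted through the matching scores) can be sketched as follows. The Python sketch is offered for illustration only; it assumes OpenCV for the template-matching primitive, grayscale images, and hypothetical names:

    import cv2
    import numpy as np

    def best_rotation_match(target, template, angles_deg):
        # Score the template against the target at each candidate rotation,
        # keeping the best normalized cross-correlation score per angle.
        h, w = template.shape
        scores = []
        for a in angles_deg:
            M = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
            rotated = cv2.warpAffine(template, M, (w, h))
            result = cv2.matchTemplate(target, rotated, cv2.TM_CCOEFF_NORMED)
            scores.append(cv2.minMaxLoc(result)[1])
        scores = np.asarray(scores)
        i = int(np.argmax(scores))
        # Fit a parabola through the scores around the best discrete angle and
        # take its vertex as the refined angle offset (the "curve peak").
        lo, hi = max(i - 1, 0), min(i + 2, len(angles_deg))
        if hi - lo < 3:
            return float(angles_deg[i]), float(scores[i])
        a2, a1, _ = np.polyfit(np.asarray(angles_deg[lo:hi], dtype=float),
                               scores[lo:hi], 2)
        refined = -a1 / (2.0 * a2) if a2 != 0 else float(angles_deg[i])
        return refined, float(scores[i])

For example, angles_deg = np.linspace(-5.0, 5.0, 21) sweeps candidate rotations of ±5 degrees around a nominal angle; the same sweep-and-fit structure applies to the scale search that yields Liu’s matching scale ds.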
Regarding claim 10, Harada in view of Park and Nishino teaches the method of claim 9,
Harada further teaches the method comprising: measuring a relative position and orientation of the motion control stage corresponding to the acquired image of the second marker (Fig. 2, Paragraph [0061]- Harada discloses the position detection unit 12 detects the position of the alignment mark M2 by detecting, through image processing, the position of the portion of the alignment mark M2 indicated at reference character P. Further, the orientation of the planar member 2 can be detected by detecting the direction in which the portion indicated at reference character A is oriented in the image of the alignment mark M2 captured by the image capturing unit 11.);
determining a degree of similarity between the acquired image of the second marker and a virtual image of the second marker, which virtual image of the second marker has the same size and shape as the second marker (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1.),
and determining the position and orientation of the second marker in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the second marker and the relative position and orientation of the virtual image of the second marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the second marker (Fig. 2, Paragraph [0055]- Harada discloses information indicating the orientation detected by the orientation detection unit 14 is supplied to a determining unit 15. The determining unit 15 determines whether the planar member 2 is oriented in the correct direction by referring to the orientation detected by the orientation detection unit 14.).
Harada in view of Park and Nishino fails to explicitly teach, responsive to determining that the degree of similarity between the acquired image of the second marker and the virtual image of the second marker does not comply with a predetermined criterion, translating and/or rotating the virtual image of the second marker with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion and the acquired image of the second marker complies with the predetermined criterion.
However, Liu explicitly teaches and responsive to determining that the degree of similarity between the acquired image of the second marker and the virtual image of the second marker does not comply with a predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.),
translating and/or rotating the virtual image of the second marker with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
and the acquired image of the second marker complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu of, responsive to determining that the degree of similarity between the acquired image of the second marker and the virtual image of the second marker does not comply with a predetermined criterion, translating and/or rotating the virtual image of the second marker with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion and the acquired image of the second marker complies with the predetermined criterion.
In the resulting combination, Harada's system for aligning and bonding multiple objects would, responsive to determining that the degree of similarity between the acquired image of the second marker and the virtual image of the second marker does not comply with a predetermined criterion, translate and/or rotate the virtual image of the second marker with respect to the FOV of the imaging system until the degree of similarity complies with the predetermined criterion.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
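For illustration on the record only, the following sketch shows the kind of angle-sweep pattern matching that Liu's paragraph [0049] describes: the model pattern is matched at several rotation angles around a nominal angle, and the peak of a curve fitted through the scores yields the angle offset d-theta together with its similarity value. The code is hypothetical (it is not Liu's implementation; all function names and parameters are the examiner's own illustrative choices).

```python
import numpy as np
from scipy.ndimage import rotate

def ncc(a, b):
    """Normalized cross-correlation between two same-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def best_angle(model, target, nominal=0.0, sweep=5.0, step=1.0):
    """Rotate the model pattern at various angles around a nominal angle,
    perform pattern matching at each angle, and refine the best match by
    fitting a parabola through the scores around the peak, yielding the
    angle offset (d-theta) together with its similarity value."""
    angles = np.arange(nominal - sweep, nominal + sweep + step, step)
    scores = [ncc(rotate(model, a, reshape=False), target) for a in angles]
    i = int(np.argmax(scores))
    if 0 < i < len(angles) - 1:
        y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:  # quadratic (curve-peak) refinement
            return angles[i] + 0.5 * (y0 - y2) / denom * step, y1
    return angles[i], scores[i]
```

In this sketch, the parabola fitted through the three scores nearest the best angle plays the role of the curve whose peak Liu's paragraph [0049] identifies.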
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1) hereafter referenced as Harada in view of Park et al. (US 20080090312 A1) hereafter referenced as Park, Nishino et al. (US 20100132187 A1) hereafter referenced as Nishino, Liu et al. (US 20110038527 A1) hereafter referenced as Liu, and Yamada et al. (US 20110286657 A1) hereafter referenced as Yamada.
Regarding claim 5, Harada in view of Park, Nishino, and Liu teaches the method of claim 4,
Harada further teaches wherein determining the degree of similarity between the acquired image of the first marker and the virtual image of the first marker comprises evaluating a cross-correlation value between the acquired image of the first marker and the virtual image of the first marker (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1 (wherein cross correlation is a common form of pattern matching).)
Harada in view of Park and Nishino fails to explicitly teach or has a maximum value.
However, Liu explicitly teaches or has a maximum value (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu or has a maximum value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would treat the degree of similarity as complying when the cross-correlation value has a maximum value.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
Harada in view of Park, Nishino, and Liu fails to explicitly teach and wherein the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
However, Yamada explicitly teaches and wherein the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value (Fig. 7, Paragraph [0049]- Yamada discloses a cross-correlation coefficient, a ratio of a minimum line width, a ratio of a minimum space width, a ratio of a line width average value, a ratio of a space width average value, a ratio of a coverage, and a ratio of the number of apexes between the extraction patterns is considered to be employed. For example, a method is considered in which the extraction patterns of which degree of matching of graphical feature is equal to or more than a predetermined threshold are classified into the same classification pattern.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Liu, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Yamada wherein the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would determine that the degree of similarity between the acquired image of the first marker and the virtual image of the first marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
The motivation behind the modification would have been to allow for a more accurate system, since Harada and Yamada are both systems that perform alignment: Harada's system provides improved system efficiency, while Yamada's system improves the accuracy of the data obtained. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Yamada et al. (US 20110286657 A1), Paragraph [0037].
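For illustration only, the following minimal sketch shows the compliance test discussed above, combining a normalized cross-correlation value (a common form of the pattern matching Harada describes) with a predetermined threshold of the kind Yamada describes. The threshold value and all function names are hypothetical, chosen by the examiner for illustration.

```python
import numpy as np

def cross_correlation(acquired, virtual):
    """Normalized cross-correlation value between the acquired image of the
    marker and the same-size virtual image of the marker."""
    a = acquired - acquired.mean()
    v = virtual - virtual.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(v)
    return float((a * v).sum() / denom) if denom else 0.0

def complies(acquired, virtual, threshold=0.95):
    """The degree of similarity complies with the predetermined criterion
    when the cross-correlation value is greater than the threshold."""
    return cross_correlation(acquired, virtual) > threshold
```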
Regarding claim 11, Harada in view of Nishino, Park, and Liu discloses the method of claim 10,
Harada further teaches wherein determining the degree of similarity between the acquired image of the second marker and the virtual image of the second marker comprises evaluating a cross-correlation value between the acquired image of the second marker and the virtual image of the second marker (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1 (wherein cross correlation is a common form of pattern matching).)
Harada in view of Park and Nishino fails to explicitly teach or has a maximum value.
However, Liu explicitly teaches or has a maximum value (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu or has a maximum value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would treat the degree of similarity as complying when the cross-correlation value has a maximum value.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
Harada in view of Park and Nishino and Liu fails to explicitly teach and wherein the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
However, Yamada explicitly teaches and wherein the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value (Fig. 7, Paragraph [0049]- Yamada discloses a cross-correlation coefficient, a ratio of a minimum line width, a ratio of a minimum space width, a ratio of a line width average value, a ratio of a space width average value, a ratio of a coverage, and a ratio of the number of apexes between the extraction patterns is considered to be employed. For example, a method is considered in which the extraction patterns of which degree of matching of graphical feature is equal to or more than a predetermined threshold are classified into the same classification pattern.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Liu, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Yamada wherein the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would determine that the degree of similarity between the acquired image of the second marker and the virtual image of the second marker complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
The motivation behind the modification would have been to allow for a more accurate system, since Harada and Yamada are both systems that perform alignment: Harada's system provides improved system efficiency, while Yamada's system improves the accuracy of the data obtained. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Yamada et al. (US 20110286657 A1), Paragraph [0037].
Claims 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1) hereafter referenced as Harada in view of Park et al. (US 20080090312 A1) hereafter referenced as Park, Nishino et al. (US 20100132187 A1) hereafter referenced as Nishino, Liu et al. (US 20110038527 A1) hereafter referenced as Liu, and Lin et al. (US 20130148878 A1) hereafter referenced as Lin.
Regarding claim 14, Harada in view of Park and Nishino discloses the method of claim 1, comprising:
Harada in view of Park and Nishino fails to explicitly teach measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the first object; determining a degree of similarity between the acquired image of the first object and a virtual image of the first object, which virtual image of the first object has the same size and shape as the first object, and responsive to determining that the degree of similarity between the acquired image of the first object and the virtual image of the first object does not comply with a predetermined criterion, translating and/or rotating the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion; and determining the position and orientation of the first object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the first object and the relative position and orientation of the virtual image of the first object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the first object and the acquired image of the first object complies with the predetermined criterion.
However, Lin explicitly teaches measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the first object (Fig. 2, Paragraph [0045]- Lin discloses comparing the actual coordinate system with a coordinate system of a second substrate to obtain three types of offset values .DELTA.X, .DELTA.Y, .DELTA..theta.; (S06): moving the first substrate to a correct waiting position based on the offset values .DELTA.X, .DELTA.Y, .DELTA..theta.; (S07): ensuring if the first substrate is disposed at the correct waiting position);
determining a degree of similarity between the acquired image of the first object and a virtual image of the first object, which virtual image of the first object has the same size and shape as the first object, and responsive to determining that the degree of similarity between the acquired image of the first object and the virtual image of the first object does not comply with a predetermined criterion (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.),
and determining the position and orientation of the first object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the first object and the relative position and orientation of the virtual image of the first object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the first object (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.)
and the acquired image of the first object complies with the predetermined criterion (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Lin of measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the first object; determining a degree of similarity between the acquired image of the first object and a virtual image of the first object, which virtual image of the first object has the same size and shape as the first object, and responsive to determining that the degree of similarity between the acquired image of the first object and the virtual image of the first object does not comply with a predetermined criterion, translating and/or rotating the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion; and determining the position and orientation of the first object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the first object and the relative position and orientation of the virtual image of the first object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the first object and the acquired image of the first object complies with the predetermined criterion.
In the resulting combination, Harada's system for aligning and bonding multiple objects would measure a relative position and orientation of the motion control stage corresponding to an acquired image of the first object; determine a degree of similarity between the acquired image of the first object and a virtual image of the first object, which virtual image of the first object has the same size and shape as the first object; responsive to determining that the degree of similarity does not comply with a predetermined criterion, translate and/or rotate the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity complies with the predetermined criterion; and determine the position and orientation of the first object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the first object and the relative position and orientation of the virtual image of the first object with respect to the FOV of the imaging system when the degree of similarity complies with the predetermined criterion.
The motivation behind the modification would have been to allow for a precise and affordable system, since Harada and Lin are both systems that perform alignment: Harada's system provides improved system efficiency, while Lin's system reduces cost and improves precision. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Lin et al. (US 20130148878 A1), Paragraph [0057].
Harada in view of Park, Nishino, and Lin fails to explicitly teach translating and/or rotating the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion.
However, Liu explicitly teaches translating and/or rotating the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu of translating and/or rotating the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion.
In the resulting combination, Harada's system for aligning and bonding multiple objects would translate and/or rotate the virtual image of the first object with respect to a FOV of the imaging system until the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
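For illustration on the record only, the following sketch outlines the offset-compensation loop that Lin's paragraphs [0022] and [0045] describe: the offsets ΔX, ΔY, Δθ are measured, applied as movement compensation values while they exceed a preset value, and the substrate is determined to be at the correct waiting position once they fall below it. The stage object, tolerances, and all names are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical preset tolerances: dX, dY (mm) and dTheta (deg).
PRESET = np.array([0.01, 0.01, 0.005])

class SimulatedStage:
    """Hypothetical stand-in for the real stage, for illustration only."""
    def __init__(self, pose):
        self._pose = np.asarray(pose, dtype=float)
    def pose(self):
        return self._pose.copy()
    def move_by(self, dx, dy, dtheta):
        self._pose += (dx, dy, dtheta)

def measure_offsets(actual_pose, target_pose):
    """Compare the actual coordinate system with the target coordinate
    system to obtain the three offset values (dX, dY, dTheta)."""
    return np.asarray(target_pose) - np.asarray(actual_pose)

def move_to_waiting_position(stage, target_pose, max_iterations=10):
    """Apply the offsets as movement compensation values until the substrate
    is determined to be at the correct waiting position (all offsets smaller
    than the preset values)."""
    for _ in range(max_iterations):
        offsets = measure_offsets(stage.pose(), target_pose)
        if np.all(np.abs(offsets) < PRESET):
            return True  # at the correct waiting position
        stage.move_by(*offsets)  # apply dX, dY, dTheta as compensation
    return False

print(move_to_waiting_position(SimulatedStage([0.3, -0.2, 1.5]), [0.0, 0.0, 0.0]))
```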
Regarding claim 16, Harada in view of Park and Nishino discloses the method of claim 1,
Harada in view of Park and Nishino fails to explicitly teach comprising: measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the second object; determining a degree of similarity between the acquired image of the second object and a virtual image of the second object, which virtual image of the second object has the same size and shape as the second object and responsive to determining that the degree of similarity between the acquired image of the second object and the virtual image of the second object does not comply with a predetermined criterion, and determining the position and orientation of the second object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the second object and the relative position and orientation of the virtual image of the second object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the second object and the acquired image of the second object complies with the predetermined criterion.
However, Lin explicitly teaches comprising: measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the second object (Fig. 2, Paragraph [0045]- Lin discloses comparing the actual coordinate system with a coordinate system of a second substrate to obtain three types of offset values .DELTA.X, .DELTA.Y, .DELTA..theta.; (S06): moving the first substrate to a correct waiting position based on the offset values .DELTA.X, .DELTA.Y, .DELTA..theta.; (S07): ensuring if the first substrate is disposed at the correct waiting position);
determining a degree of similarity between the acquired image of the second object and a virtual image of the second object, which virtual image of the second object has the same size and shape as the second object and responsive to determining that the degree of similarity between the acquired image of the second object and the virtual image of the second object does not comply with a predetermined criterion (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.),
and determining the position and orientation of the second object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the second object and the relative position and orientation of the virtual image of the second object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the second object (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.)
and the acquired image of the second object complies with the predetermined criterion (Fig. 1, Paragraph [0022]- Lin discloses a step (S06) of using the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. as movement compensation values of the first substrate to move the first substrate to the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are larger than a preset value; and determining that the first substrate is at the correct waiting position in the alignment-and-assembling space if the offset values .DELTA.X, .DELTA.Y, .DELTA..theta. are smaller than the preset value.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Lin of measuring a relative position and orientation of the motion control stage corresponding to an acquired image of the second object; determining a degree of similarity between the acquired image of the second object and a virtual image of the second object, which virtual image of the second object has the same size and shape as the second object and responsive to determining that the degree of similarity between the acquired image of the second object and the virtual image of the second object does not comply with a predetermined criterion, and determining the position and orientation of the second object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the second object and the relative position and orientation of the virtual image of the second object with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the second object and the acquired image of the second object complies with the predetermined criterion.
In the resulting combination, Harada's system for aligning and bonding multiple objects would measure a relative position and orientation of the motion control stage corresponding to an acquired image of the second object; determine a degree of similarity between the acquired image of the second object and a virtual image of the second object, which virtual image of the second object has the same size and shape as the second object; and determine the position and orientation of the second object in the frame of reference of the motion control stage based on the measured relative position and orientation of the motion control stage corresponding to the acquired image of the second object and the relative position and orientation of the virtual image of the second object with respect to the FOV of the imaging system when the degree of similarity complies with the predetermined criterion.
The motivation behind the modification would have been to allow for a precise and affordable system, since Harada and Lin are both systems that perform alignment: Harada's system provides improved system efficiency, while Lin's system reduces cost and improves precision. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Lin et al. (US 20130148878 A1), Paragraph [0057].
Harada in view of Park, Nishino, and Lin fails to explicitly teach translating and/or rotating the virtual image of the second object with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion.
However, Liu explicitly teaches translating and/or rotating the virtual image of the second object with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu of translating and/or rotating the virtual image of the second object with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion.
In the resulting combination, Harada's system for aligning and bonding multiple objects would translate and/or rotate the virtual image of the second object with respect to the FOV of the imaging system until the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1) hereafter referenced as Harada in view of Park et al. (US 20080090312 A1) hereafter referenced as Park, Nishino et al. (US 20100132187 A1) hereafter referenced as Nishino, Liu et al. (US 20110038527 A1) hereafter referenced as Liu, Lin et al. (US 20130148878 A1) hereafter referenced as Lin, and Yamada et al. (US 20110286657 A1) hereafter referenced as Yamada.
Regarding claim 15, Harada in view of Nishino, Park, Liu, and Lin discloses the method of claim 14. Harada further teaches wherein determining the degree of similarity between the acquired image of the first object and the virtual image of the first object comprises evaluating a cross-correlation value between the acquired image of the first object and the virtual image of the first object (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1 (wherein cross correlation is a common form of pattern matching).)
Harada in view of Nishino, Park, and Lin fails to explicitly teach and wherein the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value or has a maximum value.
However, Liu explicitly teaches or has a maximum value (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu or has a maximum value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would treat the degree of similarity as complying when the cross-correlation value has a maximum value.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
Harada in view of Nishino, Park, Liu, and Lin fails to explicitly teach and wherein the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
However, Yamada explicitly teaches and wherein the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value (Fig. 7, Paragraph [0049]- Yamada discloses a cross-correlation coefficient, a ratio of a minimum line width, a ratio of a minimum space width, a ratio of a line width average value, a ratio of a space width average value, a ratio of a coverage, and a ratio of the number of apexes between the extraction patterns is considered to be employed. For example, a method is considered in which the extraction patterns of which degree of matching of graphical feature is equal to or more than a predetermined threshold are classified into the same classification pattern.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, Liu, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Yamada wherein the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would determine that the degree of similarity between the acquired image of the first object and the virtual image of the first object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
The motivation behind the modification would have been to allow for a more accurate system, since Harada and Yamada are both systems that perform alignment: Harada's system provides improved system efficiency, while Yamada's system improves the accuracy of the data obtained. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Yamada et al. (US 20110286657 A1), Paragraph [0037].
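For illustration only, and complementing the angle sweep shown earlier, the following hypothetical sketch follows Liu's paragraph [0049] in scaling the model pattern around a nominal size, matching at each scale, and returning the matching scale ds with the maximum similarity value. All names, parameters, and the crop/pad helper are the examiner's illustrative choices, not Liu's implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def ncc(a, b):
    """Normalized cross-correlation between two same-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def fit_to_shape(img, shape):
    """Center-crop or zero-pad an image to the requested shape so that
    rescaled model patterns can be compared against a fixed-size target."""
    out = np.zeros(shape, dtype=float)
    h, w = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    oy, ox = (shape[0] - h) // 2, (shape[1] - w) // 2
    iy, ix = (img.shape[0] - h) // 2, (img.shape[1] - w) // 2
    out[oy:oy + h, ox:ox + w] = img[iy:iy + h, ix:ix + w]
    return out

def best_scale(model, target, scales=np.linspace(0.9, 1.1, 11)):
    """Scale the model pattern around its nominal size, perform pattern
    matching at each scale, and return the matching scale ds with the
    maximum similarity value."""
    scores = [ncc(fit_to_shape(zoom(model, s), target.shape), target)
              for s in scales]
    i = int(np.argmax(scores))
    return float(scales[i]), scores[i]
```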
Regarding claim 17, Harada in view of Park, Nishino, Liu, and Lin discloses the method of claim 16,
Harada further teaches wherein determining the degree of similarity between the acquired image of the second object and the virtual image of the second object comprises evaluating a cross-correlation value between the acquired image of the second object and the virtual image of the second object (Fig. 2, Paragraph [0051]- Harada discloses the position detection unit 12 detects, in the image captured by the image capturing unit 11, the position P of the alignment mark M1 by performing pattern matching between the image captured by the image capturing unit 11 and the prestored image identical in shape to the alignment mark M1 (wherein cross correlation is a common form of pattern matching).)
Harada in view of Nishino, Park, and Lin fails to explicitly teach or has a maximum value.
However, Liu explicitly teaches or has a maximum value (Fig. 3, Paragraph [0049]- Liu discloses the model pattern may be rotated at various angles around a nominal angle, perform pattern matching at each rotation angle to obtain the best matching results at the angle, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. This will provide not only a match and its similarity value, but also an angle offset d.theta. Similarly, the model pattern may be scaled around a nominal size, perform pattern matching at each scale to obtain the best matching results at the scale, and the final best matching for that kernel in target image may be obtained by fitting matching results in a curve and identifying the curve peak. A matching scale ds can then be obtained.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Liu or has a maximum value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would treat the degree of similarity as complying when the cross-correlation value has a maximum value.
The motivation behind the modification would have been to allow for greater system accuracy, since Harada and Liu are both systems that align multiple objects: Harada's system provides improved system efficiency, while Liu's system improves the accuracy of the system. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Liu et al. (US 20110038527 A1), Paragraph [0069].
Harada in view of Nishino, Park, Liu, and Lin fails to explicitly teach and wherein the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
However, Yamada explicitly teaches and wherein the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value (Fig. 7, Paragraph [0049]- Yamada discloses a cross-correlation coefficient, a ratio of a minimum line width, a ratio of a minimum space width, a ratio of a line width average value, a ratio of a space width average value, a ratio of a coverage, and a ratio of the number of apexes between the extraction patterns is considered to be employed. For example, a method is considered in which the extraction patterns of which degree of matching of graphical feature is equal to or more than a predetermined threshold are classified into the same classification pattern.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park, Nishino, Liu, and Lin, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Yamada wherein the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
In the resulting combination, Harada's system for aligning and bonding multiple objects would determine that the degree of similarity between the acquired image of the second object and the virtual image of the second object complies with the predetermined criterion when the cross-correlation value is greater than a predetermined threshold value.
The motivation behind the modification would have been to allow for a more accurate system, since Harada and Yamada are both systems that perform alignment: Harada's system provides improved system efficiency, while Yamada's system improves the accuracy of the data obtained. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Yamada et al. (US 20110286657 A1), Paragraph [0037].
Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Harada et al. (US 20110001974 A1) hereafter referenced as Harada in view of Park et al. (US 20080090312 A1) hereafter referenced as Park, Nishino et al. (US 20100132187 A1) hereafter referenced as Nishino, and Nishimura et al. (US 20090057891 A1) hereafter referenced as Nishimura.
Regarding claim 36, Harada in view of Park and Nishino teaches the method of claim 27,
Harada further teaches wherein attaching the first and second objects while the first and second objects are aligned comprises using at least one of a differential adhesion method (Fig. 14, Paragraph [0104]- Harada discloses from the amounts of positional displacement of the first and second transparent sheets 100 and 101, the alignment unit 60 determines the amount of positional displacement between the first and second transparent sheets 100 and 101. Then, the alignment unit 60 aligns the first and second transparent sheets 100 and 101 to each other by moving the XY stage 54 so as to reduce the thus determined amount of positional displacement to zero. Fig. 14, Paragraph [0106]- Harada discloses the UV radiation device 57 projects ultraviolet radiation onto the first and second transparent sheets 100 and 101, thereby curing the coating adhesive on the respective sheets 100 and 101 and preventing them from slipping out of position.),
or bonding the first and second objects together using an intermediate adhesive material or agent to attach the first and second objects while the first and second objects are aligned (Fig. 14, Paragraph [0104]- Harada discloses from the amounts of positional displacement of the first and second transparent sheets 100 and 101, the alignment unit 60 determines the amount of positional displacement between the first and second transparent sheets 100 and 101. Then, the alignment unit 60 aligns the first and second transparent sheets 100 and 101 to each other by moving the XY stage 54 so as to reduce the thus determined amount of positional displacement to zero. Fig. 14, Paragraph [0106]- Harada discloses the UV radiation device 57 projects ultraviolet radiation onto the first and second transparent sheets 100 and 101, thereby curing the coating adhesive on the respective sheets 100 and 101 and preventing them from slipping out of position.).
Harada in view of Park and Nishino fails to explicitly teach a capillary bonding method or a soldering method.
However, Nishimura explicitly teaches a capillary bonding method (Fig. 16C, Paragraph [0330]- Nishimura discloses after this, the so-called ball bonding method which uses a bonding capillary 361 is employed, whereby a second bump 314b is formed on the trailing end portion of the bonding wire 311 which is connected to the bump 314a (see FIGS. 16C and 16D).),
or a soldering method (Fig. 3D, Paragraph [0152]- Nishimura discloses an external connection terminal 114 made up of a solder ball electrode is placed as to the electrode terminal 113 on the other main face of the wiring board 101, using a reflow soldering method or the like, thereby forming the semiconductor device 100 (see FIG. 3D).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Harada in view of Park and Nishino, of having a method for use in the spatial registration of first and second objects, using an imaging system to acquire an image of the first object or to acquire an image of a first marker provided with the first object, with the teachings of Nishimura of a capillary bonding method or a soldering method.
In the resulting combination, Harada's system for aligning and bonding multiple objects would use a capillary bonding method or a soldering method.
The motivation behind the modification would have been to allow for greater reliability of the system, since Harada and Nishimura are both systems that bond multiple objects together: Harada's system provides improved system efficiency, while Nishimura's system improves the reliability of the products produced. Please see Harada et al. (US 20110001974 A1), Paragraph [0096] and Nishimura et al. (US 20090057891 A1), Paragraph [0317].
Allowable Subject Matter
Claims 6 and 12, along with their respective dependent claims 7 and 13, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, once the claim objections and 112(b) rejections are overcome.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 6, the prior art of record fails to explicitly teach determining the position and orientation of the first marker in the frame of reference of the motion control stage based on: the measured relative position of the motion control stage corresponding to the acquired image of the first marker; the relative position of the virtual image of the first marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the first marker and the acquired image of the first marker complies with the predetermined criterion; the measured relative position of the motion control stage corresponding to the acquired image of the further first marker; and the relative position of the virtual image of the further first marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the further first marker and the acquired image of the further first marker complies with the predetermined criterion, as claimed in claim 6.
Regarding claim 12, the prior art of record fails to explicitly teach determining the position and orientation of the second marker in the frame of reference of the motion control stage based on: the measured relative position of the motion control stage corresponding to the acquired image of the second marker; the relative position of the virtual image of the second marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the second marker and the acquired image of the second marker complies with the predetermined criterion; the measured relative position of the motion control stage corresponding to the acquired image of the further second marker; and the relative position of the virtual image of the further second marker with respect to the FOV of the imaging system when the degree of similarity between the virtual image of the further second marker and the acquired image of the further second marker complies with the predetermined criterion, as claimed in claim 12.
Conclusion
The prior art made of record and not relied upon, listed below, is considered pertinent to applicant's disclosure.
TAKAHASHI et al. (US 20190259567 A1)- To provide a lightweight and highly rigid stage device that can move in X and Y directions and a Z direction, and a charged particle beam device including the stage device. A stage device includes a chuck that is loaded with a sample, an XY stage that moves in X and Y directions, and a Z stage that moves in a Z direction. The Z stage includes: an inclined part that is fixed to the XY stage and includes an inclined surface inclined with respect to an XY plane; a movement part that moves on the inclined surface; and a table that is fixed to the movement part and is provided with a plane parallel to the XY plane. Please see Fig. 1 and the Abstract.
Feigl et al. (US 20220330465 A1)- A device for handling components that is designed and equipped to handle components with multiple lateral surfaces and/or edges of the lateral surfaces. The device has at least one receiving tool, which is arranged on a turning device, for a respective component of the components, where the receiving tool is designed and equipped to receive the respective component on one of the component cover surfaces. The turning device is designed and equipped to rotate the receiving tool on a turning plane about a turning axis, and in the process optionally convey a component located on the receiving tool from a receiving position to one or more orientation positions, optionally one or more inspecting positions, a setting-down position, and optionally an ejecting position. The device also has a holding and supplying device, which faces the receiving position, for a component supply, and a discharge device. Please see Fig. 1 and the Abstract.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUCIUS C.G. ALLEN whose telephone number is (703)756-5987. The examiner can normally be reached Mon - Fri 8-5pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571)272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUCIUS CAMERON GREEN ALLEN/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673