Prosecution Insights
Last updated: April 19, 2026
Application No. 18/727,349

METHOD FOR GENERATING THREE-DIMENSIONAL DIAGRAM OF OBJECT, THREE-DIMENSIONAL DIAGRAM GENERATION APPARATUS AND THREE-DIMENSIONAL SHAPE INSPECTION APPARATUS

Non-Final OA (§103)
Filed: Jul 09, 2024
Examiner: NGUYEN, PHU K
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Yamaha Robotics Holdings Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 93%
Examiner Intelligence

Career Allow Rate: 86% (1019 granted / 1184 resolved; +24.1% vs TC avg), above average
Interview Lift: +7.3% across resolved cases with interview (a moderate, roughly +7% lift)
Avg Prosecution: 2y 10m (typical timeline); 40 currently pending
Career History: 1224 total applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)

Based on career data from 1184 resolved cases; TC averages are estimates.
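For orientation, the headline figures above can be reproduced from the raw career counts. A minimal sketch (the Tech Center average is inferred from the stated "+24.1% vs TC avg" delta rather than reported directly, and treating the interview lift as additive is an assumption):

```python
# Reproduce the examiner's headline statistics from the raw counts shown above.
granted, resolved = 1019, 1184

allow_rate = 100 * granted / resolved   # career allowance rate, percent
tc_delta = 24.1                         # stated lift over the TC 2600 average
tc_avg = allow_rate - tc_delta          # implied (not independently reported) TC average
with_interview = allow_rate + 7.3       # stated interview lift, assumed additive

print(f"allow rate: {allow_rate:.1f}%")          # ~86.1%, shown rounded as 86%
print(f"implied TC avg: {tc_avg:.1f}%")          # ~62.0%
print(f"with interview: {with_interview:.1f}%")  # ~93.4%, consistent with the 93% figure
```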

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-30 are rejected under 35 U.S.C.
103 as being unpatentable over IKEDA et al. (KR 20100017396 A) in view of KINJO et al. (US 2022/0180494 A1).

As per claim 1, Ikeda teaches the claimed “method for generating a three-dimensional diagram of a three-dimensional object based on a vertical view image of the object imaged from vertically above and multiple oblique view images of the object imaged from multiple obliquely upper directions”, the method comprising: “a vertical view diagram generation step which extracts a contour line in the vertical view image to generate a vertical view diagram of the object” (Ikeda, page 7, 2nd paragraph - camera C0 is provided in a state in which the optical axis is directed in the vertical direction (state at the front of the workpiece W); pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value) (Noted: Ikeda’s contour and/or edges of the wire defines the contour’s length of the wire); “a diagram conversion step which respectively converts the vertical view diagram into multiple oblique view diagrams based on a shape parameter including height information of the object” (Ikeda, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image. In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0… In ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range)); “repeatedly executes adjustment of the shape parameter and conversion of the vertical view diagram into each of the oblique view diagrams until each of the converted oblique view diagram overlaps with each of the oblique view image” (Ikeda, page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified. An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points. 
In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 
3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44); and “a diagram synthesis step which synthesizes the generated vertical view diagram and each of the converted oblique view diagrams to generate the three-dimensional diagram of the object” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. By application, three-dimensional coordinates are calculated; Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44).

Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram.
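The triangulation principle the examiner draws from Kinjo [0032] and [0040] — the same wire point observed in two calibrated views defines two 3-D rays whose (near-)intersection is the 3-D point — can be sketched as follows. The camera centers and ray directions below are illustrative placeholders, not the actual calibration of either reference:

```python
# Hedged sketch: triangulate a 3-D point as the midpoint of the common
# perpendicular between two viewing rays (one per camera).

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(c0, d0, c1, d1):
    """Rays c0 + t*d0 and c1 + s*d1: solve for the closest points on each
    ray, then return their midpoint (exact intersection for ideal rays)."""
    r = sub(c1, c0)
    a, b, c = dot(d0, d0), dot(d0, d1), dot(d1, d1)
    det = a * c - b * b                      # nonzero for non-parallel rays
    t = (dot(d0, r) * c - dot(d1, r) * b) / det
    s = (dot(d0, r) * b - dot(d1, r) * a) / det
    p0 = tuple(ci + t * di for ci, di in zip(c0, d0))
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    return tuple((x + y) / 2 for x, y in zip(p0, p1))

# A vertical camera at (0,0,10) and an oblique camera at (10,0,10) both
# observe the point (1, 2, 3); the rays through it recover it exactly.
print(triangulate((0, 0, 10), (1, 2, -7), (10, 0, 10), (-9, 2, -7)))
# → (1.0, 2.0, 3.0)
```

With noisy image measurements the two rays do not meet, which is why the midpoint (or a least-squares variant) is the usual estimate.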
Claim 2 adds into claim 1 “wherein the object is a device composed of multiple components and multiple wires connecting between the components, the vertical view diagram generation step extracts each contour line of each component and each of the wires in the vertical view image to generate the vertical view diagram of the device, and the shape parameter is height of each component from a reference surface, inclination of a surface of each component, and a bending parameter of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. 
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire; furthermore, the reconstructed 3D diagram of the object represents the inclination of a surface of each component).

Claim 3 adds into claim 1 “wherein the vertical view diagram generation step generates a contour line in the vertical view image drawn by a user as the vertical view diagram” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the claimed contour line of the wire), and “the diagram conversion step converts the vertical view diagram into each of the oblique view diagrams based on the adjusted shape parameter input by the user” (Ikeda, pages 9-10 - It sets using the coordinate of the edge point identified by (7), and the height range (range which can take the height of the object location of 3D measurement) specified by the user. The height here is the height in the vertical direction, i.e., the front view direction, based on the mounting surface of the workpiece W, and is also referred to as the front view height. The height reference is not limited to the mounting surface of the workpiece W, but can be taken at the position of the camera C0 or any other position. The height range specified by the user is an object range for three-dimensional measurement along the optical axis of the camera C0).
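Ikeda's ST35-style conversion (project the front-view model into the second view via a homography for a candidate height) and the claimed loop (adjust the shape parameter until the converted diagram overlaps the oblique view image) can be sketched together. The geometry below is deliberately simplified: both cameras look straight down from height H, the second camera offset by a baseline L, so the plane-induced homography for height h reduces to a height-dependent horizontal shift; all numbers are illustrative, not from either reference:

```python
# Hedged sketch: convert a vertical-view point to the second view under a
# height hypothesis, then adjust the height until the conversion overlaps
# the observed oblique-view position.

F, H, L = 100.0, 10.0, 2.0   # focal length (px), camera height, baseline

def convert_to_oblique(u0, h):
    """Map a vertical-view pixel to the second view assuming the point lies
    at height h. For purely translated cameras the plane-induced homography
    is a height-dependent shift (the vertical pixel coordinate is unchanged,
    so it is omitted)."""
    return u0 - F * L / (H - h)

def fit_height(u0, u1_obs, lo=0.0, hi=9.0, iters=60):
    """Repeatedly adjust the height parameter until the converted point
    overlaps the observed one; the predicted shift is monotone in h, so a
    bisection on the residual suffices here."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if convert_to_oblique(u0, mid) > u1_obs:
            lo = mid   # point not shifted far enough: hypothesis too low
        else:
            hi = mid
    return (lo + hi) / 2

# Synthetic check: a point at world X = 1, height 3, seen by both cameras.
u_vert = F * 1.0 / (H - 3.0)          # vertical-view pixel
u_obl = F * (1.0 - L) / (H - 3.0)     # observed oblique-view pixel
print(round(fit_height(u_vert, u_obl), 6))   # recovers the height, 3.0
```

In Ikeda the same idea appears as scanning a user-specified height range; with full 3×3 homographies per height the search is over the overlap score of the whole converted diagram rather than a single monotone residual.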
Claim 4 adds into claim 3 “wherein the object is a device composed of multiple components and multiple wires connecting between the components” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. 
Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), “the vertical view diagram generation step generates each contour line of each component and each of the wires in the vertical view image drawn by the user as the vertical view diagram of the device, and the shape parameter is height of each component from a reference surface, inclination of a surface of each component, and a bending parameter of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range.
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image).

Claim 5 adds into claim 2 “wherein each of the wires has multiple bending points between a starting point and an ending point, the bending parameter is a three-dimensional coordinate position of each of the bending points of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range.
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire), and “the three-dimensional coordinate position of each of the bending points is a combination of a longitudinal direction coordinate position, a lateral direction coordinate position, and a height direction coordinate position in a coordinate system composed of a longitudinal direction axis extending from the starting point to the ending point of the wire in a plane of the reference surface, a lateral direction axis extending in a direction orthogonal to the longitudinal direction axis from the starting point of the wire in the plane of the reference surface, and a height direction axis extending in a vertical direction with respect to the reference surface through the starting point” (Ikeda, page 12 - FIG. 10 shows a state in which one point P on the plane D at an arbitrary height position in the space is imaged at points p0 and p1 on the imaging surfaces F0 and F1 of the cameras C0 and C1, respectively. In FIG. 10, X, Y, and Z are coordinate axes representing a three-dimensional space, and the plane D is parallel to the XY plane).
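The wire-local coordinate system claim 5 recites (longitudinal axis from starting point toward ending point in the reference plane, lateral axis orthogonal to it in that plane, height axis vertical through the starting point) can be made concrete with a short sketch. This is one plausible reading of the claim language, not something taken from Ikeda or Kinjo; the claim-6-style proportional coordinates are shown as a simple normalization by total wire length:

```python
# Hedged sketch: express a bending point in the wire-local frame of claim 5,
# then normalize by total wire length in the style of claim 6.
import math

def wire_frame_coords(start, end, point):
    """Return (longitudinal, lateral, height) of `point` for a wire whose
    start/end projections onto the reference plane define the frame."""
    sx, sy, _ = start
    ex, ey, _ = end
    px, py, pz = point
    span = math.hypot(ex - sx, ey - sy)          # in-plane start->end distance
    ux, uy = (ex - sx) / span, (ey - sy) / span  # longitudinal unit axis
    dx, dy = px - sx, py - sy
    lon = dx * ux + dy * uy                      # along start->end, in plane
    lat = -dx * uy + dy * ux                     # orthogonal, in plane (signed)
    return lon, lat, pz - start[2]               # height above the reference

def proportional(coords, total_length):
    """Claim-6-style coordinates as fractions of the total wire length."""
    return tuple(c / total_length for c in coords)

bend = wire_frame_coords((0, 0, 0), (10, 0, 0), (4, 1, 2))
print(bend)                        # → (4.0, 1.0, 2)
print(proportional(bend, 10.0))    # → (0.4, 0.1, 0.2)
```

The normalization makes the bending parameter reusable across wires of different lengths within a group, which is consistent with the group-specific parameters recited in claim 7.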
Claim 6 adds into claim 5 “wherein the three-dimensional coordinate position of each of the bending points comprises a three-dimensional proportional coordinate position which is a combination of a proportional longitudinal direction coordinate position proportional to a wire total length between the starting point and the ending point, a proportional lateral direction coordinate position proportional to the wire total length, and a proportional height direction coordinate position proportional to the wire total length” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. 
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire).

Claim 7 adds into claim 2 “wherein the vertical view diagram generation step groups the multiple wires into multiple groups composed of the wires having the starting points located on the same component, the ending points located on the same component, and the same thickness” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “extracts each contour line of each component and each of the wires in the vertical view image to generate the vertical view diagram of the device, the bending parameter is composed of multiple group-specific bending parameters defined for each of the groups” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “the diagram conversion step respectively converts each vertical view diagram of each of the wires included in each group into multiple oblique view diagrams based on each of the group-specific bending parameters” (Ikeda, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image. In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0… In ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range), and “repeatedly executes adjustment of each of the group-specific bending parameters of each of the groups and conversion of each vertical view diagram of the wires included in each of the groups into the oblique view diagrams until each of the converted oblique view diagrams of each of the wires included in each of the groups overlaps with each of the oblique view images of each of the wires included in each of the groups” (Ikeda, page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified.
An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points. In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. 
The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44); and “a diagram synthesis step which synthesizes the generated vertical view diagram and each of the converted oblique view diagrams to generate the three-dimensional diagram of the object” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. 
By application, three-dimensional coordinates are calculated; Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44). Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram. Claim 8 adds into claim 4 “wherein the vertical view diagram generation step groups the multiple wires into multiple groups composed of the wires having the starting points located on the same component, the ending points located on the same component, and the same thickness, and extracts each contour line of each component and each of the wires in the vertical view image to generate the vertical view diagram of the device, the bending parameter is composed of multiple group-specific bending parameters defined for each of the groups” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique.
Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “the diagram conversion step respectively converts each vertical view diagram of each of the wires included in each group into multiple oblique view diagrams based on each of the group-specific bending parameters, and then performs conversion of the vertical view diagram of each of the wires included in each of the groups into each oblique view diagram based on each adjusted group-specific bending parameter input by the user” (Ikeda, page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified.
An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points. In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. 
The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44); and “a diagram synthesis step which synthesizes the generated vertical view diagram and each of the converted oblique view diagrams to generate the three-dimensional diagram of the object” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. 
By application, three-dimensional coordinates are calculated; Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44). Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram. Claim 9 adds into claim 7 “wherein the multiple wires grouped in one group respectively have multiple bending points between the starting point and the ending point” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the bending point in the wire), “the group-specific bending parameter is a three-dimensional coordinate position of each of the bending points that the wires grouped in one group have in common” (Ikeda, pages 9-10 - Also in the perspective image A1, the detection area 8 is set for each lead 6. These detection areas 8 are each detection areas of the front-view image A0 based on a calculation formula (formula (1) to be described later) for converting one point on one image to one point on the other image. It sets using the coordinate of the edge point identified by (7), and the height range (range which can take the height of the object location of 3D measurement) specified by the user. The height here is the height in the vertical direction, i.e., the front view direction, based on the mounting surface of the workpiece W, and is also referred to as the front view height. The height reference is not limited to the mounting surface of the workpiece W, but can be taken at the position of the camera C0 or any other position.
The height range specified by the user is an object range for three-dimensional measurement along the optical axis of the camera C0), and “the three-dimensional coordinate position of each of the bending points is a combination of a longitudinal direction coordinate position, a lateral direction coordinate position, and a height direction coordinate position in a coordinate system composed of a longitudinal direction axis extending from the starting point to the ending point of the wire in a plane of the reference surface, a lateral direction axis extending in a direction orthogonal to the longitudinal direction axis from the starting point of the wire in the plane of the reference surface, and a height direction axis extending in a vertical direction with respect to the reference surface through the starting point” (Ikeda, page 12 - FIG. 10 shows a state in which one point P on the plane D at an arbitrary height position in the space is imaged at points p0 and p1 on the imaging surfaces F0 and F1 of the cameras C0 and C1, respectively. In FIG. 10, X, Y, and Z are coordinate axes representing a three-dimensional space, and the plane D is parallel to the XY plane). 
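[Editorial note] The triangulation principle this rejection repeatedly relies on (Ikeda, FIG. 10, where point P on plane D is imaged at p0 and p1 on the two imaging surfaces; Kinjo, [0040]) can be sketched numerically. The cameras, intrinsics, and point below are invented for illustration only and are not part of the record; the mapping uses the standard direct linear transform, not any formula disclosed by the references.

```python
import numpy as np

def triangulate(P0, P1, p0, p1):
    """Recover a 3D point from its pixel observations p0, p1 under
    two 3x4 projection matrices P0, P1 (direct linear transform)."""
    A = np.vstack([
        p0[0] * P0[2] - P0[0],
        p0[1] * P0[2] - P0[1],
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
    ])
    # The homogeneous solution minimizing |A X| is the last right
    # singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical geometry: a front-view camera (cf. Ikeda's C0) and a
# second camera offset along X (a stand-in for the oblique C1).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0], [0]])])

proj = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
X_true = np.array([10.0, 20.0, 500.0])  # the point P in space
X_est = triangulate(P0, P1, proj(P0, X_true), proj(P1, X_true))
print(np.allclose(X_est, X_true, atol=1e-4))  # → True
```

With noise-free correspondences the two view rays intersect exactly, so the recovered coordinates match the true point; with real image noise the same least-squares solution gives the closest consistent point.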
Claim 10 adds into claim 9 “wherein the three-dimensional coordinate position of each of the bending points comprises a three-dimensional proportional coordinate position which is a combination of a proportional longitudinal direction coordinate position proportional to a wire total length between the starting point and the ending point, a proportional lateral direction coordinate position proportional to the wire total length, and a proportional height direction coordinate position proportional to the wire total length” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. 
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire). As per claim 21, Ikeda teaches the claimed “three-dimensional shape inspection apparatus for performing shape inspection on a three-dimensional object,” the three-dimensional shape inspection apparatus comprising: “a master three-dimensional diagram generation part that generates a master three-dimensional diagram of a standard product of the object; and an inspection part that compares a three-dimensional diagram of the object with the master three-dimensional diagram to perform inspection on the object” (Ikeda, page 11 - FIG. 8 shows a detailed procedure regarding the lead inspection (ST3 in FIG. 4) of the IC. The processing from ST21 to ST24 in this procedure is performed for the front-view image A0 which is the image which picked up the object to be measured. First, in ST21, the positioning area 9 is set in the front-view image A0 based on the setting condition registered in the teaching.
In the following ST22, the image in this positioning area 9 is compared with the model registered in ST17 of the teaching process, and the deviation amount with respect to a model is extracted (for example, the pattern matching method can be applied to this process), “wherein the master three-dimensional diagram generation part acquires a standard vertical view image of the standard product imaged from vertically above and multiple standard oblique view images of the standard product imaged from multiple obliquely upper directions” (Ikeda, Fig. 1, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image. In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0), “extracts a contour line in the standard vertical view image of the standard product to generate a standard vertical view diagram of the standard product” (Ikeda, page 7, 2nd paragraph - camera C0 is provided in a state in which the optical axis is directed in the vertical direction (state at the front of the workpiece W; pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. 
In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value) (Noted: Ikeda’s contour and/or edges of the wire defines the contour’s length of the wire), “respectively converts the standard vertical view diagram into multiple standard oblique view diagrams based on a shape parameter including height information of the standard product stored in a storage part” (Ikeda, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image. In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0… In ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range)) and “repeatedly executes adjustment of the shape parameter and conversion of the standard vertical view diagram into each of the standard oblique view diagrams until each of the converted standard oblique view diagrams overlaps with each of the standard oblique view images” (Ikeda, page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified. An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. 
In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points. In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. 
The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44); and “synthesizes the generated standard vertical view diagram and each of the standard oblique view diagrams overlapping with each of the standard oblique view images to generate the master three-dimensional diagram” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. 
When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. By application, three-dimensional coordinates are calculated; Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44), and “the inspection part acquires a vertical view image of the object imaged from vertically above and multiple oblique view images of the object imaged from multiple obliquely upper directions” (Ikeda, Fig. 1, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image. In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0), “extracts a contour line from the vertical view image while referring to the master three-dimensional diagram to generate a vertical view diagram of the object, respectively extracts a contour line from each of the oblique view images while referring to the master three-dimensional diagram to generate multiple oblique view diagrams of the object” (Ikeda, page 7, 2nd paragraph - camera C0 is provided in a state in which the optical axis is directed in the vertical direction (state at the front of the workpiece W; pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. 
Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value) (Noted: Ikeda’s contour and/or edges of the wire defines the contour’s length of the wire), “synthesizes the generated vertical view diagram and each of the oblique view diagrams to generate the three-dimensional diagram of the object” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation.
By application, three-dimensional coordinates are calculated) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44), and “compares the generated three-dimensional diagram of the object with the master three-dimensional diagram to perform inspection on a three-dimensional shape of the object” which is obvious in an inspection process of the object by comparing the 3D object model generated by captured images from multiple camera views and the 3D diagram of the object generated by the input shape parameters. Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram.
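[Editorial note] The height-specific homography conversion cited from Ikeda's ST35 (warping the front-view model into the shape it would take in the oblique camera, for a predetermined height such as the normal work height) can likewise be sketched. All camera parameters below are hypothetical stand-ins, not values from the record: for world points on a fixed plane Z = z, the two images are related by a single 3x3 homography built from the two projection matrices.

```python
import numpy as np

def plane_homography(P0, P1, z):
    """3x3 homography mapping pixels of camera P0 to pixels of camera P1
    for world points lying on the plane Z = z."""
    # Restrict each 3x4 projection to homogeneous plane coordinates
    # (X, Y, 1) with Z fixed at z.
    H0 = np.column_stack([P0[:, 0], P0[:, 1], z * P0[:, 2] + P0[:, 3]])
    H1 = np.column_stack([P1[:, 0], P1[:, 1], z * P1[:, 2] + P1[:, 3]])
    return H1 @ np.linalg.inv(H0)

def warp(H, uv):
    """Apply homography H to a 2D pixel coordinate."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

# Hypothetical front-view and oblique cameras (stand-ins for C0 and C1).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.array([[0.0], [0], [500]])])
P1 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0], [500]])])

H = plane_homography(P0, P1, z=0.0)  # the assumed "normal" work height

# A model point at the assumed height, warped from the front view into
# the oblique view, coincides with its direct projection there.
proj = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
X = np.array([10.0, 20.0, 0.0])
print(np.allclose(warp(H, proj(P0, X)), proj(P1, X)))  # → True
```

This is why the warped model overlaps the oblique image only when the workpiece actually sits at the assumed height: a point off the plane Z = z maps under H to a pixel displaced from its true oblique-view position, which is what makes the overlap test in the cited passages informative.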
Claim 22 adds into claim 21 “wherein the object and the standard product are devices composed of multiple components and multiple wires connecting between the components, the master three-dimensional diagram generation part extracts each contour line of each component and each of the wires in the standard vertical view image to generate the standard vertical view diagram of the device, and the shape parameter is height of each component from a reference surface, inclination of a surface of each component, and a bending parameter of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. 
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously defines the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire; furthermore, the reconstructed 3D diagram of the object represents the inclination of a surface of each component). Claim 23 adds into claim 21 “a display part that displays images and diagrams; and an input part for a user to input data” (Ikeda, page 7 - This inspection apparatus has a measurement process function of both three-dimensional and two-dimensional, and image photographs the inspection object W (henceforth "work W") conveyed by the inspection line L of a factory.
The image is sequentially picked up by the unit (1), and the measurement process and the discrimination process according to various inspection purposes are executed), “wherein the master three-dimensional diagram generation part repeatedly displays the standard vertical view image of the standard product on the display part, and generates a contour line in the standard vertical view image drawn by the user by operating the input part as the standard vertical view diagram” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. 
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously define the claimed contour line of the wire), and “performs conversion of the standard vertical view diagram into each of the standard oblique view diagrams based on the adjusted shape parameter input from the input part by the user” (Ikeda, pages 9-10 - It sets using the coordinate of the edge point identified by (7), and the height range (range which can take the height of the object location of 3D measurement) specified by the user. The height here is the height in the vertical direction, i.e., the front view direction, based on the mounting surface of the workpiece W, and is also referred to as the front view height. The height reference is not limited to the mounting surface of the workpiece W, but can be taken at the position of the camera C0 or any other position. The height range specified by the user is an object range for three-dimensional measurement along the optical axis of the camera C0; page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified. An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points.
In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 
3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44), and “displays each of the converted standard oblique view diagrams and each of the standard oblique view images on the display part by superimposing each of the converted standard oblique view diagrams and each of the standard oblique view images” which is obvious for comparison of the converted standard oblique view diagrams and the standard oblique view image. Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram. Claim 24 adds into claim 23 “wherein the object and the standard product are devices composed of multiple components and multiple wires connecting between the components” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique.
Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. 
Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), “the master three-dimensional diagram generation part generates each contour line of each component and each of the wires in the standard vertical view image drawn by the user by operating the input part as the standard vertical view diagram of the device, and the shape parameter is height of each component from a reference surface, inclination of a surface of each component, and a bending parameter of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image). Claim 25 adds into claim 22 “wherein each of the wires has multiple bending points between a starting point and an ending point, the bending parameter is a three-dimensional coordinate position of each of the bending points of each of the wires” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a.
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously define the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire), and “the three-dimensional coordinate position of each of the bending points is a combination of a longitudinal direction coordinate position, a lateral direction coordinate position, and a height direction coordinate position in a coordinate system composed of a longitudinal direction axis extending from the starting point to the ending point of the wire in a plane of the reference surface, a lateral direction axis extending in a direction orthogonal to the longitudinal direction axis from the starting point of the wire in the plane of the reference surface, and a height direction axis extending in a vertical direction with respect to the reference surface through the starting point” (Ikeda, page 12 - FIG. 10 shows a state in which one point P on the plane D at an arbitrary height position in the space is imaged at points p0 and p1 on the imaging surfaces F0 and F1 of the cameras C0 and C1, respectively. In FIG.
10, X, Y, and Z are coordinate axes representing a three-dimensional space, and the plane D is parallel to the XY plane). Claim 26 adds into claim 25 “wherein the three-dimensional coordinate position of each of the bending points comprises a three-dimensional proportional coordinate position which is a combination of a proportional longitudinal direction coordinate position proportional to a wire total length between the starting point and the ending point, a proportional lateral direction coordinate position proportional to the wire total length, and a proportional height direction coordinate position proportional to the wire total length” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. 
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously define the contour’s length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or relative position (i.e., proportional to the length) on the wire). Claim 27 adds into claim 22 “wherein the master three-dimensional diagram generation part groups the multiple wires into multiple groups composed of the wires having the starting points located on the same component, the ending points located on the same component, and the same thickness” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique.
Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “extracts the contour line in the standard vertical view image of the standard product to generate the standard vertical view diagram of the standard product, the bending parameter stored in the storage part is composed of multiple group-specific bending parameters defined for each of the groups” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique.
Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “the master three-dimensional diagram generation part respectively converts each standard vertical view diagram of each of the wires included in each group into multiple standard oblique view diagrams based on each of the group-specific bending parameters” (Ikeda, pages 13-15 - First, in ST31, each camera C0 and C1 is simultaneously driven to generate an image.
In the following ST32, the pattern matching process using the model registered before inspection is performed with respect to the front-view image A0… In ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range), and “repeatedly executes adjustment of each of the group-specific bending parameters of each of the groups and conversion of each standard vertical view diagram of the wires included in each of the groups into the standard oblique view diagrams until each of the converted standard oblique view diagrams of each of the wires included in each of the groups overlaps with each of the standard oblique view images of each of the wires included in each of the groups” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. By application, three-dimensional coordinates are calculated; page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified. An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view.
In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points. In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. 
The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44). Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram.
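The triangulation rationale relied on above (intersecting back-projected view rays from two calibrated cameras, as with Ikeda’s cameras C0 and C1 and Kinjo’s cameras 41 to 44) can be illustrated with a minimal linear-triangulation sketch. The camera matrices and pixel coordinates below are hypothetical placeholders for illustration only, not calibration data from either reference.

```python
import numpy as np

def triangulate(P0, P1, uv0, uv1):
    """Linear (DLT) triangulation: recover the 3D point whose projections
    through the 3x4 camera matrices P0 and P1 are the pixel coordinates
    uv0 and uv1, i.e., the intersection (in a least-squares sense) of the
    two back-projected view rays."""
    u0, v0 = uv0
    u1, v1 = uv1
    # Each 2D observation contributes two linear constraints on the
    # homogeneous 3D point X.
    A = np.array([
        u0 * P0[2] - P0[0],
        v0 * P0[2] - P0[1],
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
    ])
    # Homogeneous least-squares solution: the right singular vector of A
    # with the smallest singular value (last row of Vt).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical front-view and oblique-view cameras (simple normalized
# projections, not Ikeda's calibration): the second camera is shifted
# one unit along x.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.3, 2.0])
uv0 = (X_true[0] / X_true[2], X_true[1] / X_true[2])
uv1 = ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])

X_est = triangulate(P0, P1, uv0, uv1)
print(np.round(X_est, 6))  # ≈ [0.2, 0.3, 2.0]
```

With noisy detections from more than two cameras, the same construction simply stacks two rows per view into A, which is how multi-camera wire-shape measurement generalizes the two-view case.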
Claim 28 adds into claim 24 “wherein the master three-dimensional diagram generation part groups the multiple wires into multiple groups composed of the wires having the starting points located on the same component, the ending points located on the same component, and the same thickness, and generates each contour line of each component and each of the wires in the standard vertical view image drawn by the user by operating the input part as the vertical view diagram of the standard product, the bending parameter stored in the storage part is composed of multiple group-specific bending parameters defined for each of the groups” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front view image A0 are obtained by the edge detection dimension, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the edge detection obtained by the technique. Then, three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value, it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set in each lead 6 on the basis of the x-coordinate, y-coordinate and the data input at ST11 of the leading end 6a. 
Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into a shape to be imaged in the isometric camera C1 using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range. The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a frontal perspective image, and an area matching the model may be specified on the converted image), and “the master three-dimensional diagram generation part repeatedly respectively converts each standard vertical view diagram of each of the wires included in each of the groups into multiple standard oblique view diagrams based on each of the group-specific bending parameters, and then performs conversion of each standard vertical view diagram of each of the wires included in each of the groups into each of the standard oblique view diagrams based on each of the adjusted group-specific bending parameters input from the input part by the user” (Ikeda, page 19 - In ST63, a position to be subjected to three-dimensional measurement on the first image is specified. An example of this position specifying method is the same as that described above regarding the various types of inspection when the camera C0 is placed in front view. In ST64, on the second image picked up by the camera C1, a position corresponding to the previously specified position on the first image is specified. In ST65, three-dimensional coordinates are calculated using the specified position on the first image and the position on the second image... For example, three-dimensional measurement may be performed on a plurality of points.
In that case, it is also possible to specify a plurality of positions in each of the ST63 and ST64, and to calculate the three-dimensional coordinates of the plurality of points in the ST65, and from ST63 to ST65 that perform the position specification and the three-dimensional coordinate calculation for one point. By repeating the step of a plurality of times, the three-dimensional coordinates may be calculated one by one for each repetition… In ST46, correlation matching processing is executed in the search area 82 using the model image registered in ST44. The region most similar to the registered image is identified, and this is set as the measurement target region on the side of the perspective image A1. In ST47, the coordinates of the representative point are obtained for the measurement target area on the perspective image A1 side, and the three-dimensional coordinates are calculated using the coordinates and the coordinates of the representative point on the front-view image A0 side. Subsequently, at ST48, the adequacy of the obtained Z coordinate is determined. The result of the determination in ST49 is output, and the processing ends after that) (Noted: By taking images of a 3D object from different camera views, the position of each point on the 3D object can be determined by triangulation (i.e., finding the intersection of different 2D view rays projected from different camera views) – see Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44; [0040] - Since the two-dimensional coordinates (x31, y31) acquired from the image of the camera 43 and the two-dimensional coordinates (x41, y41) acquired from the image of the camera 44 in step S105 of FIG. 
3 are two-dimensional coordinates corresponding to the same portion 35 of the wire 30 shown in FIG. 4, three-dimensional coordinates of the portion 35 of the wire 30 can be calculated from the two two-dimensional coordinates and the positions of the cameras 43 and 44); and “a diagram synthesis step which synthesizes the generated vertical view diagram and each of the converted oblique view diagrams to generate the three-dimensional diagram of the object” (Ikeda, page 9 - Thus, by imaging each workpiece work W once with two cameras C0 and C1, the inspection by 2D measurement and the inspection by 3D measurement can be performed continuously. Since the front-view image A0 is used in the two-dimensional measurement processing, it is possible to perform a measurement processing with high accuracy using an image without distortion of characters. When performing the three-dimensional measurement process, the corresponding measurement target point is specified between the front-view image A0 and the striking image A1, and the coordinates of the specified points are calculated by a calculation formula based on the principle of triangulation. By application, three-dimensional coordinates are calculated; Kinjo, [0032] - The wire shape measurement device 100 includes a plurality of cameras 41 to 44 that capture two-dimensional images of the semiconductor device 10, and a control unit 50 that inspects the shape of the wire 30 based on the two-dimensional images acquired by the cameras 41 to 44); and “displays each of the converted standard oblique view diagrams and each of the standard oblique view images on the display part by superimposing each of the converted standard oblique view diagrams and each of the standard oblique view images” which is obvious for comparison of the converted standard oblique view diagrams and the standard oblique view image. 
Thus, it would have been obvious, in view of Kinjo, to configure Ikeda’s method as claimed by reconstructing a 3D diagram of the object by capturing 2D images from multiple oblique view cameras and projecting the 2D points on the captured 2D images back into the 3D object’s points. The motivation is to inspect the shape of the 3D object based on the reconstructed 3D diagram. Claim 29 adds into claim 27 “wherein the multiple wires grouped in one group respectively have multiple bending points between the starting point and the ending point” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front-view image A0 are obtained by edge detection, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the same edge detection technique. Then, the three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set on each lead 6 on the basis of the x and y coordinates of the leading end 6a and the data input at ST11. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into the shape in which it would be imaged by the camera C1, using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range.
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a front-view image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously define the bending points in the wire), “the group-specific bending parameter is a three-dimensional coordinate position of each of the bending points that the wires grouped in one group have in common” (Ikeda, pages 9-10 - Also in the perspective image A1, a detection area 8 is set for each lead 6. Each of these detection areas 8 is set using the coordinates of the edge points identified in the detection areas 7 of the front-view image A0, a calculation formula (formula (1), to be described later) for converting one point on one image to one point on the other image, and the height range (the range of heights that the target location of the 3D measurement can take) specified by the user. The height here is the height in the vertical direction, i.e., the front-view direction, based on the mounting surface of the workpiece W, and is also referred to as the front-view height. The height reference is not limited to the mounting surface of the workpiece W, but can be taken at the position of the camera C0 or any other position.
The height range specified by the user is the target range for three-dimensional measurement along the optical axis of the camera C0), and “the three-dimensional coordinate position of each of the bending points is a combination of a longitudinal direction coordinate position, a lateral direction coordinate position, and a height direction coordinate position in a coordinate system composed of a longitudinal direction axis extending from the starting point to the ending point of the wire in a plane of the reference surface, a lateral direction axis extending in a direction orthogonal to the longitudinal direction axis from the starting point of the wire in the plane of the reference surface, and a height direction axis extending in a vertical direction with respect to the reference surface through the starting point” (Ikeda, page 12 - FIG. 10 shows a state in which one point P on the plane D at an arbitrary height position in the space is imaged at points p0 and p1 on the imaging surfaces F0 and F1 of the cameras C0 and C1, respectively. In FIG. 10, X, Y, and Z are coordinate axes representing a three-dimensional space, and the plane D is parallel to the XY plane).
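Ikeda's "formula (1)" and step ST35 describe mapping a point in the front-view image A0 to the perspective image A1 under an assumed height: for scene points lying on a plane of constant height (like plane D above), that mapping is a plane-induced homography. The following is a generic reconstruction of that idea using the textbook plane-induced homography, not the reference's actual equation; the calibration values and function names are assumptions:

```python
import numpy as np

def height_homography(K0, K1, R, t, height):
    """Homography mapping pixels of camera 0 (front view) to camera 1
    (oblique view), valid for scene points on the plane Z = height in
    camera-0 coordinates. Camera 1 is modeled as x1 ~ K1 (R X + t)."""
    n = np.array([[0.0, 0.0, 1.0]])   # normal of the plane Z = height
    # Plane n.X + d = 0 with d = -height gives the induced homography
    # H = K1 (R - t n^T / d) K0^{-1} = K1 (R + t n^T / height) K0^{-1}.
    t = t.reshape(3, 1)
    return K1 @ (R + (t @ n) / height) @ np.linalg.inv(K0)

def map_point(H, uv):
    """Apply a homography to a pixel coordinate (homogeneous division)."""
    x = H @ np.array([uv[0], uv[1], 1.0])
    return x[:2] / x[2]
```

Sweeping `height` over the user-specified height range and mapping an edge point from A0 through each resulting homography traces out the segment in A1 along which Ikeda's detection area 8 would be set.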
Claim 30 adds into claim 29 “wherein the three-dimensional coordinate position of each of the bending points comprises a three-dimensional proportional coordinate position which is a combination of a proportional longitudinal direction coordinate position proportional to a wire total length between the starting point and the ending point, a proportional lateral direction coordinate position proportional to the wire total length, and a proportional height direction coordinate position proportional to the wire total length” (Ikeda, pages 9, 10, 14 - In the following ST3, the coordinates of the tip position of each lead in the front-view image A0 are obtained by edge detection, and then the coordinates of the tip position of the corresponding lead in the perspective image A1 are determined by the same edge detection technique. Then, the three-dimensional coordinates of the tip of each lead are obtained from the coordinates of the tip position of each lead in both images, and from the calculated value it is determined whether or not there is an abnormality such as lifting or bending in each lead… Alternatively, the contour of the lead 6a may be extracted by extracting the edge in the positioning region 9 or the concentration gradient direction thereof, and the x and y coordinates of the tip of the lead 6a may be obtained. In ST15, the detection region 7 is set on each lead 6 on the basis of the x and y coordinates of the leading end 6a and the data input at ST11. Specifically, the length of the lead 6 and the pitch between the leads 6 on the image are calculated using the data input from ST11, the number of pixels of the camera C0, the magnification, and the like, and the calculated value … in ST35, the model is converted into the shape in which it would be imaged by the camera C1, using a homography matrix corresponding to a predetermined height (for example, a height when the work is normal) within a specified height range.
The measurement target area may be specified using the model after the conversion. Conversely, the perspective image A1 may be converted into a front-view image, and an area matching the model may be specified on the converted image) (Noted: Ikeda’s contour and/or edges of the wire obviously define the contour length of the wire and the bending point’s position on the wire; furthermore, the bending point position can be defined by its coordinates or by its relative position (i.e., proportional to the length) on the wire).

Claims 11-20 claim a three-dimensional diagram generation apparatus based on the method for generating a three-dimensional diagram of claims 1-10; therefore, they are rejected under a similar rationale.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN, whose telephone number is (571) 272-7645. The examiner can normally be reached M-F, 8-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PHU K NGUYEN/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jul 09, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602147
ZOOM ACTION BASED IMAGE PRESENTATION
2y 5m to grant · Granted Apr 14, 2026
Patent 12602874
FRAGMENTATION MODEL GENERATION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
2y 5m to grant · Granted Apr 14, 2026
Patent 12602836
METHOD TO GENERATE DISPLACEMENT FOR SYMMETRY MESH
2y 5m to grant · Granted Apr 14, 2026
Patent 12599485
SYSTEMS AND METHODS FOR ORTHOPEDIC IMPLANTS
2y 5m to grant · Granted Apr 14, 2026
Patent 12597206
MECHANICAL WEIGHT INDEX MAPS FOR MESH RIGGING
2y 5m to grant · Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
93%
With Interview (+7.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 1184 resolved cases by this examiner. Grant probability derived from career allow rate.
