DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Foreign Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of priority Application No. DE102020212279.2, filed on 9/29/2020, has been received.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/23/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Myokan et al. (US 20200177866), hereinafter ‘Myokan’, in view of Buge (US 20180199812), hereinafter ‘Buge’.
Regarding Claim 1, Myokan discloses an apparatus (1) for calibrating a three-dimensional position of a center of an entrance pupil of a camera (2) (e.g., The calibration apparatus 10 determines the internal parameter, the distortion correction coefficient, and the external parameter in such a manner that an image of a feature point within the captured image appears at a position reflective of an original position in the three-dimensional space (i.e., an apparatus for calibrating a three-dimensional position) [0043]; The internal parameter defines a relation between position coordinates of pixels in a captured image and position coordinates in a camera coordinate system having an origin at an optical center (i.e., of a center of an entrance pupil of a camera) and based on the unit of length, and represents lens characteristics determined by a focal length, a relative origin position, a shear factor, and a scale factor [0003]),
- comprising a mount (4) for holding the camera (2) in such a manner that the camera (2) captures a predetermined calibration field of view (5) (e.g., FIG. 1 illustrates a configuration of a calibration system (i.e., comprising a mount for holding the camera). The calibration system includes an imaging apparatus 12, a chart 200 for calibration, a jig 14, and a calibration apparatus 10. The imaging apparatus 12 performs calibration. The jig 14 fixes the imaging apparatus 12 and the chart 200 in a predetermined positional relation (i.e., in such a manner that the camera captures a predetermined calibration field of view). The calibration apparatus 10 performs calibration to acquire a camera parameter [0035]; the chart 200 is disposed in contact with the virtual screen 220 as illustrated in FIG. 3 so as to image the chart patterns by covering the field-of-view range (i.e., a predetermined calibration field of view) [0050]),
- comprising at least two stationary reference cameras (7 to 10) for recording the calibration field of view (5) from different directions (11 to 14) (e.g., The imaging apparatus 12 may include only one camera or include a stereo camera that is configured by spacing two cameras (left and right cameras) apart at a known interval. As another alternative, the imaging apparatus 12 may include three or more cameras (i.e., at least two stationary reference cameras) [0037]; The jig 14 fixes (i.e., stationary) the imaging apparatus 12 and the chart 200 in a predetermined positional relation (i.e., the calibration field of view (5) from different directions) [0035]; as illustrated in FIG. 3 so as to image the chart patterns by covering the field-of-view range (i.e., the calibration field of view (5) from different directions) [0050]; The image acquisition section 34 acquires data regarding a captured image of the chart 200 from the imaging apparatus (i.e., for recording the calibration field of view) [0068]),
- comprising at least one stationary main calibration surface (15 to 17) having stationary main calibration structures (18 to 21) that are arranged in the calibration field of view (5) (e.g., The jig 14 fixes (i.e., stationary) the imaging apparatus 12 and the chart 200 in a predetermined positional relation [0035]; FIG. 2 is a diagram illustrating in more detail the chart 200 for calibration. Illustrated in (a) of FIG. 2 are chart patterns including a checker and a marker that are depicted on each plane surface forming the chart 200 (i.e., at least one stationary main calibration surface). A chart pattern 212a is depicted on the plane surface disposed on the left as viewed from the imaging apparatus 12. A chart pattern 212b (i.e., having stationary main calibration structures) is depicted on the plane surface disposed on the right. Illustrated in (b) of FIG. 2 is the chart 200 that is viewed from the imaging apparatus 12 when the two boards disposed at an angle of θ face the imaging apparatus 12 (i.e., having stationary main calibration structures that are arranged in the calibration field of view) [0046]),
- comprising at least one additional calibration surface (22 to 24) which has additional calibration structures (25) which are driven to be displaceable between (e.g., As illustrated in the right portion of FIG. 4, the chart 230 depicted in the example of FIG. 4 includes three plane surfaces 232a, 232b, and 232c. More specifically, the chart 230 is structured such that the third plane surface 232c is disposed in contact with a base positioned in contact with the plane surfaces 232a and 232b associated with the two plane surfaces erected to form an angle of θ as illustrated in FIG. 2 (i.e., comprising at least one additional calibration surface) [0053]; the original checkered patterns are depicted in reverse perspective such that normal checkered patterns appear in a captured image (i.e., which has additional calibration structures) [0054]; when it is determined that one chart pattern is to be used, only one positional and postural relation should be established between the chart 200 or 230 and the imaging apparatus 12. Therefore, the jig 14 is used to establish such relation (i.e., which are driven to be displaceable between) [0056]),
-- an operating position in which the additional calibration surface (22 to 24) is arranged within the field of view (5) (e.g., when the imaging apparatus 12 is disposed so as to overlook the plane surface 232c placed on the base (i.e., an operating position in which the additional calibration surface), that is, orient the line of sight downward from the horizontal (i.e., arranged within the field of view), the chart pattern depicted on the plane surface 232c is also imaged [0054]),
- comprising an evaluation unit (29) for processing recorded camera data of the camera (2) to be calibrated and of the reference cameras (7 to 10) and status parameters of the apparatus (1) (e.g., FIG. 5 illustrates internal circuit configurations of the chart pattern generation apparatus and calibration apparatus 10. The CPU 122 controls the transmission of signals and the processing performed by elements in the chart pattern generation apparatus and calibration apparatus (i.e., comprising an evaluation unit (29) for processing). The GPU 124 performs image processing. The main memory 126 includes a random-access memory (RAM) and stores programs and data required for processing [0058]; the image acquisition section 34 acquires data regarding a captured image from the imaging apparatus 12 (step S20). When the imaging apparatus 12 includes a multi-eye camera such as a stereo camera, the image acquisition section 34 acquires data regarding an image captured by each imaging element in the multi-eye camera (i.e., recorded camera data of the camera to be calibrated and of the reference cameras) [0090]; The calibration apparatus 10 includes an image acquisition section, a feature point information acquisition section, a projective transformation parameter storage section (i.e., status parameters of the apparatus), a feature point information storage section, a calibration section, and a camera parameter storage section (i.e., status parameters of the apparatus) [0067]).
Myokan does not explicitly disclose a neutral position in which the additional calibration surface (22 to 24) is arranged outside the field of view (5) and via a calibration surface displacement drive (27).
Buge discloses a neutral position in which the additional calibration surface (22 to 24) is arranged outside the field of view (5) (e.g., while the other calibration objects 22 received by the holder 20 (for example the calibration objects 22B and 22C) are positioned outside the view field of the camera (i.e., a neutral position in which the additional calibration surface is arranged outside the field of view) [0027]),
via a calibration surface displacement drive (27) (e.g., The calibration method moreover comprises a step in which a motorized drive unit, in drive connection with the holder, is firstly controlled to drive the holder, with the calibration object positioned thereon (i.e., via a calibration surface displacement drive) in the camera view field [0006]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan with Buge to include a neutral position in which the additional calibration surface (22 to 24) is arranged outside the field of view (5), and via a calibration surface displacement drive (27), as this would give the advantage of reading out imaging features, for example to determine an imaging size of the calibration object positioned within the view field of the camera. In particular, a diameter of the calibration object is determined as the imaging size from the camera image or the camera images of maximum definition and is correlated with the actual diameter of the calibration object (see Buge, [0031]).
Regarding Claim 3, Myokan and Buge disclose the limitations as discussed above in Claim 1.
Myokan further discloses the additional calibration structures (25) of the respective additional calibration surface (22 to 24) are provided in a 3D arrangement that deviates from a flat surface (e.g., From the viewpoint that the position coordinates of the feature points of the chart patterns on the chart 200 are position coordinates originally given to the chart, which is a real object, the position coordinates of the feature points of the chart patterns on the chart 200 may be regarded as the “three-dimensional (3D) model position coordinates” of the chart (i.e., the additional calibration structures of the respective additional calibration surface are provided in a 3D arrangement). However, the information required for calibration is available as far as the position coordinates on each plane surface of the chart 200 are known. Practically, therefore, two-dimensional position coordinates will suffice (i.e., that deviates from a flat surface) [0065]; for transforming a two-dimensional index of a feature point within the chart pattern into the 3D model position coordinates in the chart 200 (i.e., that deviates from a flat surface) [0073]).
Regarding Claim 4, Myokan and Buge disclose the limitations as discussed above in Claim 1.
Myokan further discloses the main calibration structures (18 to 21) are arranged in a main calibration structure main plane (xy) and additionally in a main calibration structure angular plane (yz) (e.g., In the example of FIG. 10, the upper left vertex of the second square above the marker 52 (i.e., the main calibration structures) is the origin of a coordinate system of the index (i, j). Meanwhile, in a system of position coordinates (x, y) based on the unit of a pixel on the image plane (i.e., are arranged in a main calibration structure main plane (xy)) [0085]; the case with step S36, the indexes (i, j) are transformed into the 3D model position coordinates (x″, y″, z″) (i.e., additionally in a main calibration structure angular plane (yz)) by using the homography matrix, and stored in association with the position coordinates (u, v) of the feature points in the captured image [0102]).
wherein the main calibration structure angular plane is arranged at an angle greater than 5° to the main calibration structure main plane (e.g., Further, circles in FIG. 20 indicate ranges corresponding to a plurality of viewing angles (i.e., the main calibration structure angular plane). In order from closest to the center to farthest from the center, the ranges are designated 94a, 94b, and 94c. The viewing angles of these ranges are 50 degrees, 100 degrees, and 120 degrees (i.e., arranged at an angle greater than 5° to the main calibration structure main plane) [0121]).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Myokan in view of Buge and further in view of McGarry et al. (US 20200320740), hereinafter ‘McGarry’.
Regarding Claim 2, Myokan and Buge disclose the limitations as discussed above in Claim 1.
Myokan and Buge do not explicitly disclose at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), which is driven to be displaceable between - a first field of view recording position and - at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32), via a camera displacement drive (31).
McGarry discloses at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), which is driven to be displaceable between (e.g., The field calibration, for example, can determine scan motion direction relative to the camera, and the relative 3D pose (rotation and translation) among multiple cameras, for systems containing multiple cameras (i.e., at least one further reference camera) [0056]; The one or more other linear stages 114B enable movement of the camera 120 and the light source 130 relative to the mount 150 (i.e., which is movable relative to the mount) [0049]; a portion of the 3D calibration structure of the mount 150 are concurrently illuminated by the light source 130 and in the field of view of the camera 120. Images are then acquired for the predetermined orientations (i.e., for recording the calibration field of view) [0050]; the relative motion is provided by moving the camera 120 (i.e., driven to be displaceable) and the light source 130 [0052]),
- a first field of view recording position (e.g., During scanning A, the vision system 100 can obtain the image of the first long side of the bracket 150A, each of the images contains a first portion of the first 3D calibration structure (i.e., a first field of view recording position). In this state, the camera 130 at the first convex side wall is downwards [0071]),
- and at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32) (e.g., During scanning B, the vision system 100 can obtain the image of the first short side of the bracket 150A, each of the images contains the second part of the first 3D calibration structure (i.e., at least one further field-of-view recording position that differs from the first field-of-view recording position) 155A and the second part of the object 105 containing the second convex side wall. In this state, the camera 130 at the second convex side wall is downwards (i.e., in an image capture direction) [0072]),
via a camera displacement drive (31) (e.g., the relative motion is provided by moving the camera 120 (i.e., via a camera displacement drive) and the light source 130 (e.g., using one or more linear stages 114B) [0052]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan and Buge with McGarry for at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), which is driven to be displaceable between - a first field of view recording position and - at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32), via a camera displacement drive (31), as this would give the advantage of providing one or more rotational degrees of freedom and one or more translational degrees of freedom (see McGarry, [0046]).
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Myokan in view of Buge and McGarry, and further in view of Beardsley (US 20030202691), hereinafter ‘Beardsley’.
Regarding Claim 5, Myokan and Buge disclose the limitations as discussed above in Claim 1.
Myokan further discloses holding the camera to be calibrated (2) in the mount (4) (e.g., FIG. 1 illustrates a configuration of a calibration system (i.e., holding the camera to be calibrated in the mount). The calibration system includes an imaging apparatus 12; the imaging apparatus 12 performs calibration [0035]),
- capturing the stationary main calibration surface (15 to 17) with the camera (2) to be calibrated with the additional calibration surface (22 to 24) (e.g., The calibration apparatus 10 acquires data regarding an image of the chart 200 for calibration (i.e., capturing the stationary main calibration surface) that is captured by the imaging apparatus 12 (i.e., with the camera (2) to be calibrated), and performs calibration computation based on the acquired data [0039]; the chart 230 depicted in the example of FIG. 4 includes three plane surfaces 232a, 232b, and 232c (i.e., the additional calibration surface) [0053]),
- displacing the additional calibration surface (22 to 24) between the operating position (e.g., when the imaging apparatus 12 is disposed so as to overlook the plane surface 232c placed on the base (i.e., displacing the additional calibration surface), that is, orient the line of sight downward from the horizontal (i.e., the operating position), the chart pattern depicted on the plane surface 232c is also imaged [0054]),
- capturing the additional calibration structures (25) with the camera (2) to be calibrated with the additional calibration surface (22 to 24) in the operating position (e.g., the chart 230 (i.e., the additional calibration structures) depicted in the example of FIG. 4 includes three plane surfaces 232a, 232b, and 232c [0053]; when the imaging apparatus 12 (i.e., capturing with the camera to be calibrated) is disposed so as to overlook the plane surface 232c placed on the base (i.e., the additional calibration surface), that is, orient the line of sight downward from the horizontal (i.e., in the operating position), the chart pattern depicted on the plane surface 232c is also imaged [0054]),
- evaluating the recorded image data of the camera (2) to be calibrated and the cameras (7 to 10; 7 to 10, 30) with the evaluation unit (29) (e.g., FIG. 5 illustrates internal circuit configurations of the chart pattern generation apparatus and calibration apparatus 10. The CPU 122 controls the transmission of signals and the processing performed by elements in the chart pattern generation apparatus and calibration apparatus (i.e., with the evaluation unit). The GPU 124 performs image processing. The main memory 126 includes a random-access memory (RAM) and stores programs and data required for processing [0058]; the image acquisition section 34 acquires data regarding a captured image from the imaging apparatus 12 (step S20). When the imaging apparatus 12 includes a multi-eye camera such as a stereo camera, the image acquisition section 34 acquires data regarding an image captured by each imaging element in the multi-eye camera (i.e., evaluating the recorded image data of the camera to be calibrated) [0090]; The imaging apparatus 12 may include only one camera or include a stereo camera that is configured by spacing two cameras (left and right cameras) apart at a known interval. As another alternative, the imaging apparatus 12 may include three or more cameras (i.e., and the cameras) [0037]).
Myokan does not explicitly disclose the reference cameras (7 to 10; 7 to 10, 30), or positioning in the neutral position with the calibration surface displacement drive (27).
Buge discloses in the neutral position (e.g., while the other calibration objects 22 received by the holder 20 (for example the calibration objects 22B and 22C) are positioned outside the view field of the camera (i.e., the neutral position) [0027]),
and position with the calibration surface displacement drive (27) (e.g., The calibration method moreover comprises a step in which a motorized drive unit, in drive connection with the holder, is firstly controlled to drive the holder, with the calibration object positioned thereon (i.e., position with the calibration surface displacement drive) in the camera view field [0006]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan with Buge for a neutral position, and positioning with the calibration surface displacement drive (27), as this would give the advantage of reading out imaging features, for example to determine an imaging size of the calibration object positioned within the view field of the camera. In particular, a diameter of the calibration object is determined as the imaging size from the camera image or the camera images of maximum definition and is correlated with the actual diameter of the calibration object (see Buge, [0031]).
Myokan, Buge and McGarry do not explicitly disclose and the reference cameras (7 to 10; 7 to 10, 30).
Beardsley discloses the reference cameras (7 to 10; 7 to 10, 30) (e.g., The calibration method starts by placing a known calibration object 150 on the turntable 140 and, if necessary, adjusting the position, focus, and aperture of each camera 130. The set of calibration images 161 of the calibration object are then acquired 210 by the cameras 130 (i.e., the reference cameras) [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan, Buge, and McGarry with Beardsley for the reference cameras (7 to 10; 7 to 10, 30) as this would give the advantage that a translation vector between the fixed camera and each floating camera (i.e., reference camera) is determined, and the translation vectors are used to place all the cameras in a common coordinate frame (see Beardsley, [0015]).
Regarding Claim 6, Myokan, Buge, McGarry and Beardsley disclose the limitations as discussed above in Claim 5.
Myokan and Buge do not explicitly disclose at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), which is driven to be displaceable between, - a first field of view recording position, - and at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the first field of view recording position, displacing the movable reference camera (30) with the camera displacement drive (31), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the further field of view recording position, and evaluating recorded image data of the movable reference camera (30) with an evaluation unit (29).
McGarry discloses which is driven to be displaceable and via a camera displacement drive (31) (e.g., the relative motion (i.e., which is driven to be displaceable) is provided by moving the camera 120 and the light source 130 (e.g., using one or more linear stages 114B) (i.e., via a camera displacement drive). The relative motion can be based on instructions provided by the computing device [0052]),
a first field of view recording position (e.g., During scanning A, the vision system 100 can obtain the image of the first long side of the bracket 150A, each of the images contains a first portion of the first 3D calibration structure (i.e., a first field of view recording position). In this state, the camera 130 at the first convex side wall is downwards [0071]),
- and at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32) (e.g., During scanning B, the vision system 100 can obtain the image of the first short side of the bracket 150A, each of the images contains the second part of the first 3D calibration structure 155A and the second part of the object 105 containing the second convex side wall (i.e., at least one further field-of-view recording position that differs from the first field-of-view recording position). In this state, the camera 130 at the second convex side wall is downwards (i.e., in an image capture direction) [0072]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan and Buge with McGarry such that the camera is driven to be displaceable between a first field of view recording position and at least one further field-of-view recording position that differs from the first field-of-view recording position in an image capture direction (32), via a camera displacement drive (31), as this would give the advantage of mitigating accuracy/repeatability concerns related to motion stages of the vision system. For instance, variabilities in stage rotation, tip angle, and the angular run-out of a translation stage can be compensated for (see McGarry, [0027]).
Myokan, Buge, and McGarry do not explicitly disclose at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the first field of view recording position, - displacing the movable reference camera (30), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the further field of view recording position, and evaluating recorded image data of the movable reference camera (30) with an evaluation unit (29).
Beardsley discloses at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5) (e.g., one camera is selected as a fixed camera, all other cameras are designated as floating cameras (i.e., at least one further reference camera) [Abstract]; the position of each camera 130 relative to the turntable 140 for any desired degree of the rotation can be determined (i.e., which is movable relative to the mount) [0031]; This provides an extensive set of views of the calibration object from a variety of viewpoints (i.e., for recording the calibration field of view) [0028]),
- capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the first field of view recording position (e.g., The calibration method utilizes sets of sequences of images 161 of the calibration object 150 (i.e., capturing at least one of the main calibration surface) [0032]; the floating cameras 262 (i.e., with the movable reference camera) can be placed 270 in the common coordinate frame (i.e., in the first field of view recording position) of the fixed camera 261 [0064]),
- displacing the movable reference camera (30) (e.g., the floating cameras 262 can be placed (i.e., displacing the movable reference camera) 270 in the common coordinate frame [0064]),
- capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the further field of view recording position (e.g., The calibration object 150 (i.e., capturing at least one of the main calibration surface), as shown in FIG. 1, has patterns of similar appearance on its visible faces, so to achieve automatic tracking the patterns are supplemented with the distinctive colored markers 153, one unique marker for each face of the object (i.e., and the additional calibration surface) [0043]; one camera is selected as a fixed camera, all other cameras are designated as floating cameras (i.e., with the movable reference camera) [0014]; This provides an extensive set of views of the calibration object from a variety of viewpoints (i.e., in the further field of view recording position) [0028]),
- and evaluating recorded image data of the movable reference camera (30) with an evaluation unit (29) (e.g., The system 100 includes multiple cameras (C1-C6) 130, and a turntable 140. A processor 160 receives sets of calibration images 161 (i.e., evaluating recorded image data) acquired by the cameras 130 (i.e., of the movable reference camera (30) with an evaluation unit) to determine calibration parameters 170 of the system [0023]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan, Buge, and McGarry with Beardsley for at least one further reference camera (30), which is movable relative to the mount (4), for recording the calibration field of view (5), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the first field of view recording position, - displacing the movable reference camera (30), - capturing at least one of the main calibration surface (15 to 17) and the additional calibration surface (22 to 24) with the movable reference camera (30) in the further field of view recording position, and evaluating recorded image data of the movable reference camera (30) with an evaluation unit (29), as this would give the advantage of a full metric calibration, obtaining the intrinsic and extrinsic parameters of the multiple cameras (see Beardsley, [0021]).
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Myokan in view of Buge and further in view of Kranski et al. (US 20170374360), hereinafter ‘Kranski’.
Regarding Claim 7, Myokan discloses an evaluation unit (53) for processing recorded camera data of the cameras (42 to 44) (e.g., FIG. 5 illustrates internal circuit configurations of the chart pattern generation apparatus and calibration apparatus 10. The CPU 122 controls the transmission of signals and the processing performed by elements in the chart pattern generation apparatus and calibration apparatus (i.e., an evaluation unit for processing). The GPU 124 performs image processing. The main memory 126 includes a random-access memory (RAM) and stores programs and data required for processing [0058]; the image acquisition section 34 acquires data regarding a captured image from the imaging apparatus 12 (step S20). When the imaging apparatus 12 includes a multi-eye camera such as a stereo camera, the image acquisition section 34 acquires data regarding an image captured by each imaging element in the multi-eye camera (i.e., recorded camera data of the cameras) [0090]),
Myokan and Buge do not explicitly disclose a system (41) for determining relative positions of centers of entrance pupils of at least two cameras (42 to 44) which are mounted on a common supporting frame (45) with respect to each other, - a plurality of calibration structure carrier components (46 to 49) comprising calibration structures (18 to 21) that can be arranged around the supporting frame (45) such that each of the cameras (42 to 44) detects at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49), wherein the arrangement of the calibration structure carrier components (46 to 49) is such that at least one of the calibration structures (18 to 21) of one and the same calibration structure carrier component (46 to 49) is captured by two cameras.
Kranski discloses a system (41) for determining relative positions of centers of entrance pupils of at least two cameras (42 to 44) which are mounted on a common supporting frame (45) with respect to each other (e.g., FIGS. 1A and 1B illustrate an example system 100 to perform camera calibration. System 100 provides a multi-camera approach for performing calibration, where a movable platform device (e.g., conveyer belt 104) includes multiple placement positions for multiple cameras (i.e., a system of at least two cameras which are mounted on a common supporting frame with respect to each other) [0032]; intrinsic parameters can be derived, such as for example: (a) the focal length of the camera in both the X and Y axes; (b) optical center of the camera (or camera sensor) (i.e., for determining relative positions of centers of entrance pupils); and/or (c) distortion coefficients [0078]),
- a plurality of calibration structure carrier components (46 to 49) comprising calibration structures (18 to 21) that can be arranged around the supporting frame (45) such that each of the cameras (42 to 44) detects at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49) (e.g., FIG. 2 provides an illustration of the interior of a target 102 according to some embodiments of the invention. In some embodiments, the target 102 is a multi-faceted structure comprised of multiple planar regions (i.e., a plurality of calibration structure carrier components), where each facet/planar region 202 includes a respective planar target (i.e., comprising calibration structures) [0039]; FIGS. 1A and 1B illustrate an example system 100 to perform camera calibration. System 100 provides a multi-camera approach for performing calibration, where a movable platform device (e.g., conveyer belt 104) includes multiple placement positions for multiple cameras (i.e., that can be arranged around the supporting frame such that each of the cameras detects) [0032]; The target 102 comprises a generally partial-spherical shape having multiple planar targets (i.e., detects at least calibration structures of two of the calibration structure carrier components) portions located thereon. A full-FOV (field of view) target ensures that all sections of the imager are covered by detectable points [0033]),
wherein the arrangement of the calibration structure carrier components (46 to 49) is such that at least one of the calibration structures (18 to 21) of one and the same calibration structure carrier component (46 to 49) is captured by two cameras (e.g., Each of the cameras are located at a designated position, spaced apart from one another at a set distance, with their image capture direction facing the target 102 (i.e., the arrangement of the calibration structure carrier components is such that) [0034]; see Fig. 1A; FIG. 2 provides an illustration of the interior of a target 102 according to some embodiments of the invention. In some embodiments, the target 102 is a multi-faceted structure comprised of multiple planar regions (i.e., of one and the same calibration structure carrier component), where each facet/planar region 202 includes a respective planar target (i.e., at least one of the calibration structures) [0039]; In the system 100 illustrated in FIGS. 1A and 1B, conveyer belt 104 includes four positions (positions 1, 2, 3, and 4) for four cameras 110a-d to undergo concurrent calibration (i.e., is captured by two cameras) at any given moment in time [0032]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan and Buge with Kranski for a system (41) for determining relative positions of centers of entrance pupils of at least two cameras (42 to 44) which are mounted on a common supporting frame (45) with respect to each other, - a plurality of calibration structure carrier components (46 to 49) comprising calibration structures (18 to 21) that can be arranged around the supporting frame (45) such that each of the cameras (42 to 44) detects at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49), wherein the arrangement of the calibration structure carrier components (46 to 49) is such that at least one of the calibration structures (18 to 21) of one and the same calibration structure carrier component (46 to 49) is captured by two cameras, as this would give the advantage of reducing the number of images captured while simultaneously preserving overall information density (see Kranski, [Abstract]), and to determine the true parameters of a camera device that produces an image, which allows for determination of calibration data of the camera such as intrinsic parameters and extrinsic parameters (see Kranski, [0004]).
Regarding Claim 8,
Myokan further discloses evaluating recorded image data of the cameras (42 to 44) with an evaluation unit (53) (e.g., FIG. 5 illustrates internal circuit configurations of the chart pattern generation apparatus and calibration apparatus 10. The CPU 122 controls the transmission of signals and the processing performed by elements in the chart pattern generation apparatus and calibration apparatus (i.e., evaluating). The GPU 124 performs image processing. The main memory 126 includes a random-access memory (RAM) and stores programs and data required for processing [0058]; the image acquisition section 34 acquires data regarding a captured image from the imaging apparatus 12 (step S20). When the imaging apparatus 12 includes a multi-eye camera such as a stereo camera, the image acquisition section 34 acquires data regarding an image captured by each imaging element in the multi-eye camera (i.e., recorded image data of the cameras) [0090]),
Myokan and Buge do not explicitly disclose - mounting the cameras (42 to 44) on a common supporting frame (45), - arranging calibration structure carrier components (46 to 49) as a group of calibration structure carrier components (46 to 49) around the supporting frame (45), - and capturing the calibration structure carrier components (46 to 49) that are located in a field of view of the cameras (42 to 44) in a predetermined relative position of the supporting frame (45) to the group of calibration structure carrier components (46 to 49).
Kranski discloses mounting the cameras (42 to 44) on a common supporting frame (45) (e.g., System 100 provides a multi-camera approach for performing calibration, where a movable platform device (e.g., conveyer belt 104) includes multiple placement positions for multiple cameras (i.e., mounting the cameras on a common supporting frame) [0032]),
- arranging calibration structure carrier components (46 to 49) as a group of calibration structure carrier components (46 to 49) around the supporting frame (45) (e.g., Each of the cameras are located at a designated position, spaced apart from one another at a set distance, with their image capture direction facing the target 102 (i.e., arranging calibration structure carrier components around the supporting frame) [0034]; see Fig. 1A; the target 102 is a multi-faceted structure comprised of multiple planar regions (i.e., as a group of calibration structure carrier components), where each facet/planar region 202 includes a respective planar target [0039]),
- capturing the calibration structure carrier components (46 to 49) that are located in a field of view of the cameras (42 to 44) in a predetermined relative position of the supporting frame (45) to the group of calibration structure carrier components (46 to 49) (e.g., Each of the cameras are located at a designated position, spaced apart from one another at a set distance (i.e., in a predetermined relative position of the supporting frame), with their image capture direction facing the target 102 (i.e., capturing the calibration structure carrier components that are located in a field of view of the cameras) [0034]; see Fig. 1A; the target 102 is a multi-faceted structure comprised of multiple planar regions (i.e., to the group of calibration structure carrier components), where each facet/planar region 202 includes a respective planar target [0039]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan and Buge with Kranski for mounting the cameras (42 to 44) on a common supporting frame (45), - arranging calibration structure carrier components (46 to 49) as a group of calibration structure carrier components (46 to 49) around the supporting frame (45), - and capturing the calibration structure carrier components (46 to 49) that are located in a field of view of the cameras (42 to 44) in a predetermined relative position of the supporting frame (45) to the group of calibration structure carrier components (46 to 49), as this would give the advantage of determining the true parameters of a camera device that produces an image, which allows for determination of calibration data of the camera such as intrinsic parameters and extrinsic parameters (see Kranski, [0004]).
Regarding Claim 9, Myokan, Buge and Kranski disclose the limitations as discussed above in Claim 8.
Myokan and Buge do not explicitly disclose displacing the supporting frame (45) in such a manner that at least one of the cameras (42 to 44) captures a calibration structure carrier component (46 to 49) which has not been previously detected by this camera (42 to 44), - repeating the capturing and displacement until each of the cameras (42 to 44) has captured at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49), wherein calibration structures (18 to 21) of at least one of the calibration structure carrier components (46 to 49) have been captured by two cameras (42 to 44).
Kranski discloses displacing the supporting frame (45) in such a manner that at least one of the cameras (42 to 44) captures a calibration structure carrier component (46 to 49) which has not been previously detected by this camera (42 to 44) (e.g., The conveyer belt 104 operates to shift the location of the cameras from the direction of the input stack 112 to the direction of the output stack 114 (i.e., displacing the supporting frame). The movement of the conveyer belt 104 is paused when the cameras reach set positions for calibration [0035]; When the conveyer is paused, each camera at its respective position then captures an image of the target 102 (i.e., in such a manner that at least one of the cameras captures a calibration structure carrier component) [0036]; as shown in FIG. 8F, the conveyer belt 104 begins moving again to shift camera 610a into a new position (i.e., which has not been previously detected by this camera). The next camera 610b to be calibrated is also loaded onto the conveyer belt 104 [0059]),
- repeating the capturing and displacement until each of the cameras (42 to 44) has captured at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49) (e.g., each camera at its respective position then captures an image of the target 102. After an image has been captured by each camera, the conveyer moves again (i.e., repeating the capturing and displacement until each of the cameras has captured at least calibration structures) such that the cameras shift to their next succeeding positions to capture another image of the target 102. In this manner, each camera will successively capture an image of the target 102 from each of the positions 1, 2, 3, and 4 (i.e., two of the calibration structure carrier components). Once the camera has completed taking an image from each position 1-4, the next shifting of its position will cause that camera to be placed into the output stack 114 [0036]),
wherein calibration structures (18 to 21) of at least one of the calibration structure carrier components (46 to 49) have been captured by two cameras (42 to 44) (e.g., the conveyer belt 104 is paused when camera 110d is located as position 1, camera 110c is located at position 2, camera 110b is located at position 3, and camera 110a is located at position 4 (i.e., captured by two cameras) [0035]; The images captured by the cameras include some or all of the marker planes (i.e., of at least one of the calibration structure carrier components) within the target 102 (i.e., calibration structures) [0037]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Myokan and Buge with Kranski for displacing the supporting frame (45) in such a manner that at least one of the cameras (42 to 44) captures a calibration structure carrier component (46 to 49) which has not been previously detected by this camera (42 to 44), - repeating the capturing and displacement until each of the cameras (42 to 44) has captured at least calibration structures (18 to 21) of two of the calibration structure carrier components (46 to 49), wherein calibration structures (18 to 21) of at least one of the calibration structure carrier components (46 to 49) have been captured by two cameras (42 to 44), as this would give the advantage of reducing the number of images captured while simultaneously preserving overall information density (see Kranski, [Abstract]), and to determine the true parameters of a camera device that produces an image, which allows for determination of calibration data of the camera such as intrinsic parameters and extrinsic parameters (see Kranski, [0004]).
Regarding Claim 10, Myokan, Buge and Kranski disclose the limitations as discussed above in Claim 8.
Myokan further discloses that the calibration structures (18 to 21) of one (46) of the calibration structure carrier components (46 to 49) are used as master structures for specifying a coordinate system (xyz) of the relative positions to be determined (e.g., The calibration apparatus 10 acquires data regarding an image (i.e., the calibration structures) of the chart 200 (i.e., one of the calibration structure carrier components) for calibration that is captured by the imaging apparatus 12, and performs calibration computation based on the acquired data in order to derive an internal parameter and an external parameter. When these parameters are used, the relation between a pixel m(u, v) in the captured image and a position M(X, Y, Z) in a world coordinate system (i.e., are used as master structures for specifying a coordinate system (xyz) of the relative positions to be determined) [0039]; a transformation parameter for acquiring the 3D model position coordinates in the chart 200 (i.e., one of the calibration structure carrier components) from the index of a feature point (i.e., the calibration structures). Moreover, position coordinates (x″, y″, z″) based on the unit of length in the 3D model of the chart are determined (i.e., are used as master structures for specifying a coordinate system (xyz) of the relative positions to be determined) [0087-0088]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Agustin R Campozano whose telephone number is (571) 272-0256. The examiner can normally be reached Mon-Fri 8-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Catherine T. Rastovski, can be reached on (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Agustin R Campozano/Examiner, Art Unit 2863
/Catherine T. Rastovski/Supervisory Primary Examiner, Art Unit 2863