DETAILED ACTION
Notice of AIA Status
The present application is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.
Response to Arguments
Applicant’s arguments, filed 12/04/2025, with respect to the 35 U.S.C. 103 rejections of claims 1-5, 7-9, 11-19, 36, and 53-54 have been fully considered but are moot because the arguments do not apply to the reference(s) or combination of references relied upon in the current rejection(s).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-9, 11-19, 36, and 53-54 are rejected under 35 U.S.C. 103 as being unpatentable over CHI (US 20220258356 A1), hereinafter referenced as CHI, in view of KROEGER (US 20200357140 A1), hereinafter referenced as KROEGER, in further view of CLAVEAU et al. (US 20170287166 A1), hereinafter referenced as CLAVEAU, and in further view of KOTHARI et al. (US 20230399015 A1), hereinafter referenced as KOTHARI.
Regarding claim 1, CHI explicitly teaches a method (Fig. 1. Paragraph [0066]-CHI discloses the spatial calibration method of a robot ontology coordinate system based on a visual perception device provided in the embodiments of this disclosure is applicable to an environment shown in FIG. 1 (wherein Fig. 1 illustrates a four-legged robot with a mounted camera). In paragraph [0070]-CHI discloses as shown in FIG. 3, a spatial calibration method of a robot ontology coordinate system based on a visual perception device is provided (wherein the system determines an unknown variable (i.e. a transformation relationship between the visual perception coordinate system and the ontology coordinate system) based on an equivalence relationship between the transformation relationships of a first sampling point and the unknown variable and the transformation relationship of a second sampling point and the unknown variable)), comprising:
receiving a first image (Fig. 1. Paragraph [0121]-CHI discloses when operations need to be performed by using the robot, the robot may control the visual perception device in real time to perform image acquisition on the target object in the environment, to obtain a current image. After obtaining the current image, the robot may obtain visual pose information of the target object in the visual perception coordinate system according to information such as coordinates of the target object in the image and internal parameters of a camera. Please also read paragraph [0066]) captured by a first camera (Fig. 1, #110 called a camera. Paragraph [0066]) of a robot (Fig. 1, #120 called a robot. Paragraph [0066]), wherein the first image includes an object (Fig. 1, #140 called a target calibration object. Paragraph [0066]) having at least one known dimension (Fig. 1. Paragraph [0074]-CHI discloses the target calibration object refers to an object used for calibration. A size of the target calibration object may be pre-determined and may include lengths between feature points of the target calibration object (wherein the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics and the target calibration object may be disposed at an end of the target motion mechanism (i.e. legs)). In paragraph [0075]-CHI discloses in FIG. 4, a rectangle range formed by dashed lines represents a field of view of the visual perception device. Setting of sampling points of the target motion mechanism need to ensure that the calibration object is within the field of view. In paragraph [0111]-CHI discloses the coordinates of the target calibration object may be represented by using coordinates of feature points in the target calibration object);
Fig. 4 (CHI) illustrates a mobile four-legged robot with a single-camera system and an associated coordinate system.
CHI fails to explicitly teach receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; projecting a set of points on the object in the first image to pixel locations in the second image; determining, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
However, KROEGER explicitly teaches receiving a second image (Fig. 1, #110(2) and #208(2) called a second image. Paragraph [0021 and 0036]) captured by a second camera (Fig. 4, #106(2) and #206(2), called a second camera. Paragraph [0021 and 0035]) of the robot (Fig. 4, #104 and #204, called an autonomous vehicle. Paragraph [0021 and 0035]), wherein the second image includes the object (Fig. 1-2. Paragraph [0034]-KROEGER discloses FIG. 2 depicts a pictorial flow diagram of process 200 for calibrating cameras disposed on an autonomous vehicle (wherein an autonomous vehicle is a robot, the calibration processes include both intrinsic and extrinsic calibration, the calibration process involves the projection of points into overlapping images, and each point corresponds to the same image feature or portion in both images). In paragraph [0035]-KROEGER discloses at operation 202, the process can include capturing images of an environment at multiple cameras. The operation 202 illustrates a vehicle 204 having a first camera 206(1) and a second camera 206(2) disposed on the vehicle 204. The first camera 206(1) captures image data such as an image 208(1) and the second camera 206(2) captures image data such as a second image 208(2) (wherein the object may be an image feature or image portion present in both images)). Please also read paragraph [0020-0021]), wherein a field of view of the first camera (Fig. 4, #106(1) and #206(1), called a first camera. Paragraph [0021 and 0036]) and a field of view of the second camera at least partially overlap (Fig. 2. Paragraph [0035]-KROEGER discloses the cameras 206(1), 206(2) are generally configured next to each other, both facing in the direction of travel and with significant overlap in their fields of view (wherein image acquisition between both cameras may occur substantially simultaneously or over different times). Please also see Fig. 1 and read paragraph [0022]);
Fig. 2 (KROEGER) illustrates an autonomous vehicle (i.e. a robot) and a two-camera system with overlapping views that simultaneously captures images of an environment.
projecting a set of points on the object (Fig. 1. Paragraph [0037]-KROEGER discloses at operation 210, the process can include identifying point pairs. The operation 210 may identify, for portions of the first image 208(1) and the second image 208(2) that overlap, first points 212a, 214a, 216a, 218a, 220a, 222a, 224a, 226a, 228a, 230a, in the first image 208(1) and second points 212b, 214b, 216b, 218b, 220b, 222b, 224b, 226b, 228b, 230b, in the second image 208(2). Please also read paragraph [0023 and 0040-0043]) in the first image (Fig. 1-2, #110(1) and #208(1), called a first image. Paragraph [0021 and 0035]) to pixel locations in the second image (Fig. 1-2, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]. Further in paragraph [0037]-KROEGER discloses the first points and the second points may be image features, e.g., with the first point 212a corresponding to an image feature or portion in the first image 208(1) and the second point 212b corresponding to the same image feature or portion in the second image 208(2), the first point 214a corresponding to another image feature or portion in the first image 208(1) and the second point 214b corresponding to the same other image feature or portion in the second image 208(2), and so forth (wherein the object may be an image feature, logical grouping of features or image portion that is present in both images, such as a vehicle, roadway, line, edge or reference point)); and
Fig. 2 (KROEGER) illustrates the projection of points, which each correspond to the same image feature or image portion within images #208(1) and (2).
determining, for each point of the projected set of points on the object (Fig. 2. Paragraph [0040]-KROEGER discloses at operation 232, the process 200 can determine errors associated with point pairs. In paragraph [0041]-KROEGER discloses the point 212b has an associated hollow circle 238, the point 216b has an associated hollow circle 240, and the point 228b has an associated hollow circle 242. The points 212b, 216b, 228b generally represent the detected location of the features (e.g., distorted location) and the hollow circles 238, 240, 242 represent reprojections of associated features in the environment. Each hollow circle 238, 240, 242 represents a reprojection of the first points corresponding to the points 212b, 216b, 228b. The hollow circle 238 may represent a reprojection of the point 212a from the first image 208(a) into the second image 208(b), the hollow circle 240 may represent a reprojection of the point 216a from the first image 208(a) into the second image 208(b), and the hollow circle 242 may represent a reprojection of the point 228a from the first image 208(a) into the second image 208(b), each assuming an associated depth of the points. Please also read paragraph [0023-0025]), a first distance between the point on the object in the second image (Fig. 1, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]) and the pixel location of the corresponding projected point in the second image (Fig. 2. Paragraph [0041]-KROEGER discloses the error associated with the reprojection optimization for the point 212b may be the distance, e.g., the Euclidian distance measured in pixels, between the point 212b and the hollow circle 238. The error associated with the point 216b may be the distance between the point 216b and the hollow circle 240 and the error associated with the point 228b may be the distance between the point 228b and the hollow circle 242 (wherein the hollow circles represent reprojected points of the points in the first image #208(1), which, as mentioned above, correspond to the same feature or image portion as the points projected in the second image #208(2)));
Fig. 2 (KROEGER) illustrates multiple “X”s and hollow circles that represent the projection/reprojection of points from image #208(1) to image #208(2) and undistorted point locations, each of which corresponds to an image portion or feature.
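The per-point "first distance" described above (e.g., the Euclidean pixel distance between the detected point 212b and its reprojection 238 in KROEGER) can be sketched as follows; the function name and arguments are illustrative assumptions.

```python
import numpy as np

def point_distances(detected_pts2, projected_pts2):
    """Euclidean pixel distance between each detected point in the second
    image (KROEGER's "X"s) and the corresponding projected/reprojected
    point (KROEGER's hollow circles)."""
    return np.linalg.norm(detected_pts2 - projected_pts2, axis=1)
```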
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI of having a method, comprising: receiving a first image captured by a first camera of a robot, wherein the first image includes an object having at least one known dimension, with the teachings of KROEGER of having receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; projecting a set of points on the object in the first image to pixel locations in the second image; and determining, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
The combination thus retains CHI's method of receiving a first image captured by a first camera of a robot, wherein the first image includes an object having at least one known dimension.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
CHI in view of KROEGER fail to explicitly teach determining a reprojection error based on a statistical measure of the first distances.
However, CLAVEAU explicitly teaches determining a reprojection error based on a statistical measure of the first distances (Fig. 13. Paragraph [0125]-CLAVEAU discloses FIG. 13 is a flow diagram of method 300 for extrinsically calibrating a network of cameras using a calibration target (wherein the systems and techniques can be applied to robotics and the camera network may comprise stereo cameras and/or time-synchronized cameras 44a to 44d with overlapping views that acquire images of the same calibration target (e.g. fiducial) from different views). In paragraph [0143]-CLAVEAU discloses one approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory (wherein the reprojection error represents the distances between projected points and an average reprojection error represents a statistical measure)).
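A statistical measure of the first distances of the kind CLAVEAU describes (an average reprojection error) can be sketched as below. The mean is used for illustration; other statistical measures (e.g., a median or root-mean-square) would also fit the claim language.

```python
import numpy as np

def reprojection_error(distances):
    """Average of the per-point first distances, i.e., the mean
    reprojection error that CLAVEAU compares against a threshold."""
    return float(np.mean(distances))
```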
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER of having a method, with the teachings of CLAVEAU of having determining a reprojection error based on a statistical measure of the first distances.
The combination thus results in CHI's method further determining a reprojection error based on a statistical measure of the first distances.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
CHI in view of KROEGER fail to explicitly teach generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
However, KOTHARI explicitly teaches generating an instruction to perform an action (Fig. 1. Paragraph [0029]-KOTHARI discloses referring now to FIG. 1, a system 100 for camera calibration and/or validating camera calibration is illustratively depicted (wherein the system 100 is an autonomous vehicle and calibration uses a calibration target such as a fiducial). In paragraph [0062]-KOTHARI discloses the system may determine whether the confidence score is above or below a threshold (wherein the threshold may be predetermined, updated and/or dynamic). If the confidence score is above the threshold, then the system may consider the sensor (e.g., camera) to be calibrated. If the confidence score is below the threshold, then the system may consider the sensor (e.g., camera) to be not calibrated. In paragraph [0063]-KOTHARI discloses if the cameras are calibrated (306: YES), steps 302-306 may be repeated, for example, periodically, upon occurrence of certain events (e.g., a detection of a jolt, rain, etc.), and/or upon receipt of user instructions. If one or more cameras are not calibrated (306: NO), the system may generate a signal that will result in an action (308)) when the reprojection error is greater than a threshold value (Fig. 1. Paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. A reprojection error larger than about 0.5 pixel, about 0.7 pixel, about 1.1 pixel, about 1.3 pixel, about 0.5-1.1 pixel, about 0.6-1.2 pixel, or the like may be indicative of a sensor calibration issue (wherein errors and validation are based on statistical analyses). In paragraph [0061]-KOTHARI discloses the system may use the camera calibration validation factor and the motion-based validation factor to generate a confidence score, which is an assessment of confidence in the accuracy of the calibration of the camera that captured the image frames of the calibration target), the threshold value set based on the at least one known dimension (Fig. 1. In paragraph [0036]-KOTHARI discloses the process for calibrating cameras involves imaging a calibration target from multiple viewpoints, and then identifying calibration points in the image that correspond to known points on the calibration target. In paragraph [0037]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated (wherein a calibration target may be a fiducial, checkerboard and/or AprilTags, and the targets are associated with tags). In paragraph [0041]-KOTHARI discloses a tag may include associated fiducial information such as an identification of the corresponding fiducial, size of the fiducial, color of the fiducial, associated corner of the fiducial (e.g., top left, bottom right, etc.), or the like).
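The threshold-and-action step that KOTHARI describes can be sketched as below. Deriving the threshold from a known dimension of the object (here, a fixed fraction of a known edge length in pixels) is an illustrative assumption; KOTHARI's examples recite fixed pixel values such as 1 pixel.

```python
def check_calibration(error_px, known_edge_px, fraction=0.01):
    """Compare the reprojection error against a threshold scaled by a
    known dimension of the object and return the resulting action."""
    threshold = fraction * known_edge_px  # illustrative scaling assumption
    if error_px > threshold:
        # e.g., generate an alert, stop autonomous navigation, or
        # trigger recalibration (cf. KOTHARI at paragraph [0063]).
        return "perform_action"
    return "calibrated"
```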
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The combination thus results in CHI's method further generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 2, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI in view of KROEGER fails to explicitly teach wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
However, KOTHARI explicitly teaches wherein the object includes a set of corner points (Fig. 3. Paragraph [0035]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated. A checkerboard fiducial has a quadrilateral boundary within which varying patterns of black and white blocks are arranged. The pattern can include any shape, image, icon, letter, symbol, number, or pattern. Example checkerboard fiducials can include AprilTags. In paragraph [0039]-KOTHARI discloses uniquely identifiable tags 205(a)-(n) are positioned at one or more corners of some or all of the fiducials 201(a)-(n), where a tag may be used to identify a fiducial within a captured image (wherein a fully tagged calibration target 270 is configured to have a tag on each of its four corners, and tags include information such as the size of the fiducial, location, associated corner (e.g., top left, bottom right, etc.), etc.). The uniquely identifiable tags may be positioned at a subset of the corners of some of the fiducials (e.g., 2, 3, etc.). In paragraph [0055]-KOTHARI discloses corners of the fiducials on the calibration target image may be used for precisely identifying the feature point location), and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points (Fig. 3. In paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
The combination thus results in CHI's method wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 3, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 2, CHI in view of KROEGER fails to explicitly teach wherein the object is a rectangle having four corner points, and wherein the set of points on the object projected to pixel locations in the second image includes the four corner points of the rectangle.
However, KOTHARI explicitly teaches wherein the object is a rectangle having four corner points (Fig. 3. Paragraph [0035]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated. A checkerboard fiducial has a quadrilateral boundary within which varying patterns of black and white blocks are arranged. The pattern can include any shape, image, icon, letter, symbol, number, or pattern. Example checkerboard fiducials can include AprilTags. In paragraph [0039]-KOTHARI discloses uniquely identifiable tags 205(a)-(n) are positioned at one or more corners of some or all of the fiducials 201(a)-(n), where a tag may be used to identify a fiducial within a captured image (wherein a fully tagged calibration target 270 is configured to have a tag on each of its four corners, and tags include information such as the size of the fiducial, location, associated corner (e.g., top left, bottom right, etc.), etc.). The uniquely identifiable tags may be positioned at a subset of the corners of some of the fiducials (e.g., 2, 3, etc.). In paragraph [0055]-KOTHARI discloses corners of the fiducials on the calibration target image may be used for precisely identifying the feature point location), and wherein the set of points on the object projected to pixel locations in the second image includes the four corner points of the rectangle (Fig. 3. In paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having wherein the object is a rectangle having four corner points, and wherein the set of points on the object projected to pixel locations in the second image includes the four corner points of the rectangle.
The combination thus results in CHI's method wherein the object is a rectangle having four corner points, and wherein the set of points on the object projected to pixel locations in the second image includes the four corner points of the rectangle.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 4, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI further teaches wherein the object is a fiducial marker in an environment of the robot (Fig. 1. Paragraph [0066]-CHI discloses the spatial calibration method of a robot ontology coordinate system based on a visual perception device is shown in FIG. 1. As shown in FIG. 1, a robot 120 may include a body 121 and target motion mechanisms 122. The target motion mechanisms 122 may be legs of the robot, and there may be four target motion mechanisms. Further in paragraph [0074]-CHI discloses the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics. Apriltag may be understood as a simplified QR code. ArUco is a trellis diagram of Hamming code).
Regarding claim 5, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 4, CHI further teaches wherein the fiducial marker is an AprilTag (Fig. 1. Paragraph [0074]-CHI discloses the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics. Apriltag may be understood as a simplified QR code. ArUco is a trellis diagram of Hamming code).
Regarding claim 7, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI in view of KROEGER fail to teach wherein determining the reprojection error based on the first distances comprises: determining a second distance of a longest edge of the object along two of the set of points on the object.
However, KOTHARI explicitly teaches wherein determining the reprojection error based on the first distances (Fig. 3. Paragraph [0055]-KOTHARI discloses the purpose of camera-based calibration is to correlate the pixel coordinates on the camera image plane with the physical coordinates of a calibration target. In paragraph [0056]-KOTHARI discloses the system may analyze each captured image to retrieve tag data from tags included in the image such as, without limitation, fiducial size, type, and location included in the captured image. In paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. A value of reprojection error larger than about 0.5 pixel, about 0.7 pixel, about 1.1 pixel, about 1.3 pixel, about 0.5-1.1 pixel, about 0.6-1.2 pixel, or the like may be indicative of a sensor calibration issue. Outlier errors may be eliminated based on statistical analyses) comprises:
determining a second distance of a longest edge of the object along two of the set of points on the object (Fig. 3. Paragraph [0035]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated. Example checkerboard fiducials can include AprilTags. One or more of the fiducials of a calibration target panel may have different shapes (e.g., a triangular shape, a rectangular shape, a circular shape, a square shape etc.) and/or internal patterns. In paragraph [0039]-KOTHARI discloses uniquely identifiable tags 205(a)-(n) are positioned at one or more corners of some or all of the fiducials 201(a)-(n), where a tag may be used to identify a fiducial within a captured image (wherein a fully tagged calibration target 270 is configured to have a tag on each of its four corners, and tags include information such as the size of the fiducial, location, associated corner (e.g., top left, bottom right, etc.), etc.). The uniquely identifiable tags may be positioned at a subset of the corners of some of the fiducials (e.g., 2, 3, etc.). In paragraph [0055]-KOTHARI discloses corners of the fiducials on the calibration target image may be used for precisely identifying the feature point location).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having wherein determining the reprojection error based on the first distances comprises: determining a second distance of a longest edge of the object along two of the set of points on the object.
The combination thus results in CHI's method wherein determining the reprojection error based on the first distances comprises determining a second distance of a longest edge of the object along two of the set of points on the object.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
CHI in view of KROEGER fail to teach dividing each of the first distances by the second distance to generate normalized first distances, wherein the statistical measure comprises an average of the normalized first distances.
However, CLAVEAU explicitly teaches dividing each of the first distances by the second distance to generate normalized first distances, wherein the statistical measure comprises an average of the normalized first distances (Fig. 13. Paragraph [0143]-CLAVEAU discloses one possible approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory. The obtaining steps 310 and 312 can be performed iteratively until the calibration error gets lower than a predetermined error value, at which point the providing step 302, identifying step 306 and assigning step 308 can also be stopped. Please also read paragraph [0149]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of KOTHARI of having a method, with the teachings of CLAVEAU of having dividing each of the first distances by the second distance to generate normalized first distances, wherein the statistical measure comprises an average of the normalized first distances.
The combination thus results in CHI's method further dividing each of the first distances by the second distance to generate normalized first distances, wherein the statistical measure comprises an average of the normalized first distances.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
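The claim 7 computation addressed above, normalizing each first distance by the length of the object's longest edge (the "second distance") and averaging the normalized values, can be sketched as below; the corner ordering and the combination of CLAVEAU's averaging with a longest-edge normalization are illustrative assumptions.

```python
import numpy as np

def normalized_reprojection_error(distances, corners):
    """distances : (N,) per-point first distances in pixels
    corners   : (M, 2) corner points of the object, ordered around
                its boundary"""
    # Second distance: length of the longest edge between consecutive
    # corner points (the polygon is closed by appending the first corner).
    closed = np.vstack([corners, corners[:1]])
    edges = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    second_distance = edges.max()
    # Statistical measure: average of the normalized first distances.
    return float(np.mean(distances / second_distance))
```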
Regarding claim 8, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI fails to explicitly teach wherein the first camera is a vision camera and the second camera is a depth camera.
However, KROEGER further teaches wherein the first camera (Fig. 1 and 2, #106(1) and #210(1) called a camera. Paragraph [0021]) is a vision camera and the second camera (Fig. 1-2, #106(2) and #210(2) called a camera. Paragraph [0021]) is a depth camera (Fig. 2. Paragraph [0076]-KROEGER discloses the sensor system(s) 506 can include cameras (e.g., RGB, IR, intensity, depth, time of flight, etc.)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of CLAVEAU and in further view of KOTHARI of having a method, with the teachings of KROEGER of having wherein the first camera is a vision camera and the second camera is a depth camera.
The combination thus results in CHI's method wherein the first camera is a vision camera and the second camera is a depth camera.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
Regarding claim 9, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 8, CHI in view of KROEGER fails to explicitly teach wherein the depth camera is a stereo vision camera.
However, CLAVEAU explicitly teaches wherein the depth camera is a stereo vision camera (Fig. 13. Paragraph [0082]-CLAVEAU discloses the cameras can be depth cameras (e.g., structured light cameras such as the first-generation Microsoft Kinect® or modulated light cameras such as time-of-flight cameras). In paragraph [0126]-CLAVEAU discloses non-limiting examples of multi-camera networks to which the present techniques can be applied include stereo camera rigs. In paragraph [0149]-CLAVEAU discloses the validation method based on 3D reconstruction involves reconstructing the inner corners of a checkerboard calibration target using stereo information or techniques, and then reprojecting the resulting 3D points into the validation images).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of KOTHARI of having a method, with the teachings of CLAVEAU of having wherein the depth camera is a stereo vision camera.
The combination thus results in CHI's method wherein the depth camera is a stereo vision camera.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
Regarding claim 11, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI in view of KROEGER fails to explicitly teach wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an alert.
However, KOTHARI explicitly teaches wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an alert (Fig. 3. Paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. In paragraph [0061]-KOTHARI discloses if the cameras are calibrated (306: YES), steps 302-306 may be repeated, for example, periodically, upon occurrence of certain events (e.g., a detection of a jolt, rain, etc.), and/or upon receipt of user instructions. If one or more cameras are not calibrated (306: NO), the system may generate a signal that will result in an action (308). In paragraph [0065]-KOTHARI discloses users can be instructed to recalibrate the camera(s) using a notification or alert (for example, using a vehicle interface such as an interactive display or audio system). Please also read paragraph [0056, 0058-0059 and 0062]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an alert.
The combination thus results in CHI's method wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an alert.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 12, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI in view of KROEGER fails to explicitly teach wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an instruction to stop autonomous navigation of the robot.
However, KOTHARI explicitly teaches wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an instruction to stop autonomous navigation of the robot (Fig. 3. Paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. In paragraph [0061]-KOTHARI discloses if the cameras are calibrated (306: YES), steps 302-306 may be repeated, for example, periodically, upon occurrence of certain events (e.g., a detection of a jolt, rain, etc.), and/or upon receipt of user instructions. If one or more cameras are not calibrated (306: NO), the system may generate a signal that will result in an action (308). The system may identify an action for the AV to perform and causes the AV to perform the action (310). The action may include recalibrating the sensor, altering a velocity of the AV, and/or any other suitable action in response to the action assessment. Please also read paragraph [0056, 0058-0059 and 0062]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a method, with the teachings of KOTHARI of having wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an instruction to stop autonomous navigation of the robot.
The combination thus results in CHI's method wherein generating an instruction to perform an action when the reprojection error is greater than a threshold value comprises generating an instruction to stop autonomous navigation of the robot.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 13, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 1, CHI fails to explicitly teach wherein generating an instruction to perform an action comprises generating an instruction to calibrate one or more parameters associated with the first camera and/or the second camera based on the reprojection error.
However, KROEGER explicitly teaches wherein generating an instruction to perform an action comprises generating an instruction to calibrate one or more parameters associated with the first camera and/or the second camera based on the reprojection error (Fig. 2. Paragraph [0067]-KROEGER discloses the extrinsic calibration component 536 can also reduce the set of point pairs to be considered, e.g., by removing outliers and noise. The extrinsic calibration component 536 may determine a projection error using the epipolar lines, and point pairs having an error (e.g., a distance between a point and an epipolar line) equal to or above a threshold error may be excluded from the set of point pairs. The extrinsic calibration component 536 can then determine a correction function based on the subset of point pairs. In paragraph [0070]-KROEGER discloses the intrinsic calibration component 538 may determine a re-projection error using the re-projected points and estimates of point depth, with point pairs having an error (e.g., a distance between a point and re-projected point) equal to or above a threshold error may be excluded from the set of point pairs. The intrinsic calibration component 534 can then determine a correction function based on the subset of point pairs, e.g., by optimizing a correction matrix using the subset of point pairs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of CLAVEAU and in further view of KOTHARI of having a method, with the teachings of KROEGER of having wherein generating an instruction to perform an action comprises generating an instruction to calibrate one or more parameters associated with the first camera and/or the second camera based on the reprojection error.
The combination thus results in CHI's method wherein generating an instruction to perform an action comprises generating an instruction to calibrate one or more parameters associated with the first camera and/or the second camera based on the reprojection error.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
Regarding claim 14, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 13, CHI fails to explicitly teach wherein calibrating one or more parameters associated with the first camera and/or the second camera comprises updating a lens model for the first camera and/or the second camera.
However, KROEGER explicitly teaches wherein calibrating one or more parameters (Fig. 1-2. Paragraph [0065]-KROEGER discloses the calibration data component 534 can include functionality to store calibration data associated with one or more sensors of the vehicle 502. The calibration data can store mounting angles and/or positions of sensors and/or any extrinsic and/or intrinsic information associated with the one or more sensors, including calibration angles, mounting location, height, direction, yaw, tilt, pan, timing information, lens distortion parameters, transmission medium parameters, and the like. Further, the calibration data component 534 can store a log of some or all of the calibration operations performed, such as a time elapsed from the most recent calibration, and the like) associated with the first camera (Fig. 1-2, #106(1) and #210(1) called a first camera. Paragraph [0021 and 0035]) and/or the second camera (Fig. 1-2, #106(2) and #210(2) called a second camera. Paragraph [0021 and 0035]) comprises updating a lens model for the first camera and/or the second camera (Fig. 1-2. Paragraph [0067]-KROEGER discloses the extrinsic calibration component 536 can also reduce the set of point pairs to be considered, e.g., by removing outliers and noise. The extrinsic calibration component 536 may determine a projection error using the epipolar lines, and point pairs having an error (e.g., a distance between a point and an epipolar line) equal to or above a threshold error may be excluded from the set of point pairs. The extrinsic calibration component 536 can then determine a correction function based on the subset of point pairs. In paragraph [0070]-KROEGER discloses the intrinsic calibration component 538 may determine a re-projection error using the re-projected points and estimates of point depth, with point pairs having an error (e.g., a distance between a point and re-projected point) equal to or above a threshold error excluded from the set of point pairs. The intrinsic calibration component 534 can then determine a correction function based on the subset of point pairs, e.g., by optimizing a correction matrix using the subset of point pairs. In paragraph [0071]-KROEGER discloses the extrinsic calibration component 536 and the intrinsic calibration component 538 can perform operations in parallel).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of CLAVEAU and in further view of KOTHARI of having a method, with the teachings of KROEGER of having wherein calibrating one or more parameters associated with the first camera and/or the second camera comprises updating a lens model for the first camera and/or the second camera.
The combination thus results in CHI's method wherein calibrating one or more parameters associated with the first camera and/or the second camera comprises updating a lens model for the first camera and/or the second camera.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
Regarding claim 15, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 13, CHI fails to explicitly teach wherein the robot is configured to use an extrinsics transform to relate a first coordinate system of the first camera to a second coordinate system of the second camera, and calibrating one or more parameters associated with the first camera and/or the second camera comprises updating the extrinsics transform.
However, KROEGER explicitly teaches wherein the robot is configured to use an extrinsics transform to relate a first coordinate system of the first camera (Fig. 1-2, #106(1) and #206(1) called a camera. Paragraph [0021 and 0035]) to a second coordinate system of the second camera (Fig. 1-2, #106(2) and #206(2) called a camera. Paragraph [0021 and 0035]), and calibrating one or more parameters associated with the first camera and/or the second camera comprises updating the extrinsics transform (Fig. 1-2. Paragraph [0034]-KROEGER discloses FIG. 2 depicts a pictorial flow diagram of an example process 200 for calibrating cameras disposed on an autonomous vehicle. In paragraph [0065]-KROEGER discloses the calibration data component 534 can store one or more calibration angles (or calibration characteristics, generally) associated with a sensor, such as calibration transforms for an array of cameras. The calibration data can store mounting angles and/or positions of sensors and/or any extrinsic and/or intrinsic information. In paragraph [0067]-KROEGER discloses the extrinsic calibration component 536 may determine a projection error. The extrinsic calibration component 536 can then determine a correction function based on the subset of point pairs. In paragraph [0071]-KROEGER discloses the extrinsic calibration component 536 and the intrinsic calibration component 538 can perform operations in parallel).
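Examiner's note, for illustration only: an extrinsics transform relating a first camera's coordinate system to a second camera's may be represented as a 4x4 homogeneous matrix, as in the following hypothetical Python sketch (not drawn from any cited reference; names and structure are the examiner's assumptions):

    import numpy as np

    def make_extrinsics(R, t):
        # T_2_1 maps points expressed in camera 1's frame into camera 2's.
        T = np.eye(4)
        T[:3, :3] = R   # 3x3 rotation from frame 1 to frame 2
        T[:3, 3] = t    # origin of frame 1 expressed in frame 2
        return T

    def to_camera2(T_2_1, p1):
        # Apply the transform to a single 3-D point from camera 1's frame.
        return (T_2_1 @ np.append(p1, 1.0))[:3]

Updating the extrinsics transform then amounts to replacing R and t with re-estimated values.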
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of CHI in view of KROEGER, and in further view of CLAVEAU and KOTHARI, with the teachings of KROEGER wherein the robot is configured to use an extrinsics transform to relate a first coordinate system of the first camera to a second coordinate system of the second camera, and calibrating one or more parameters associated with the first camera and/or the second camera comprises updating the extrinsics transform.
The combination yields CHI's method wherein the robot is configured to use an extrinsics transform to relate a first coordinate system of the first camera to a second coordinate system of the second camera, and calibrating one or more parameters associated with the first camera and/or the second camera comprises updating the extrinsics transform.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception of systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
Regarding claim 16, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 15. CHI further teaches wherein updating the extrinsics transform (Fig. 1. Paragraph [0062]-CHI discloses a transformation relationship between a visual perception coordinate system and an ontology coordinate system that is obtained based on a spatial calibration method of a robot ontology coordinate system based on a visual perception device provided in the embodiments is applicable to a scheme of controlling a robot by using a robot decision model based on artificial intelligence (AI). A second pose of a target object in the ontology coordinate system may be obtained according to the transformation relationship between the visual perception coordinate system and the ontology coordinate system and a first pose of the target object in the visual perception coordinate system. The robot decision model may make decisions according to the second pose of the target object and environment information in an image acquired by a camera. In paragraph [0068]-CHI discloses the robot may store the transformation relationship between the visual perception coordinate system and the ontology coordinate system, and control, by using the transformation relationship, the motion mechanism to move) comprises:
capturing a set of first images (Fig. 1. Paragraph [0121]-CHI discloses when operations need to be performed by using the robot, the robot may control the visual perception device in real time to perform image acquisition on the target object in the environment, to obtain a current image. After obtaining the current image, the robot may obtain visual pose information of the target object in the visual perception coordinate system. Please also read paragraph [0066]) from the first camera (Fig. 1, #110 called a camera Paragraph [0066]), wherein each of the first images in the set includes the object (Fig. 1, #140 called a target calibration object. Paragraph [0066]. Further in paragraph [0074]-CHI discloses the target calibration object refers to an object used for calibration. A size of the target calibration object may be pre-determined and may include lengths between feature points of the target calibration object (wherein the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics and the target calibration object may be disposed at an end of the target motion mechanism (i.e. legs)). In paragraph [0075]-CHI discloses in FIG. 4, a rectangle range formed by dashed lines represents a field of view of the visual perception device. Setting of sampling points of the target motion mechanism need to ensure that the calibration object is within the field of view. In paragraph [0111]-CHI discloses the coordinates of the target calibration object may be represented by using coordinates of feature points in the target calibration object).
CHI in view of KROEGER fails to explicitly teach capturing a set of second images from the second camera, wherein each of the second images in the set includes the object, each of the first images having a corresponding second image in the set of second images taken at a same time as the first image using a same pose; performing a non-linear optimization over the first set of images and the second set of images to minimize the reprojection error for pairs of images from the first set and the second set, wherein an output of the non-linear optimization is a current extrinsics transform; and updating the extrinsics transform used by the robot based on the current extrinsics transform output from the non-linear optimization.
However, CLAVEAU explicitly teaches capturing a set of second images from the second camera (Fig. 8, #44a-44d, called cameras. Paragraph [0126]-CLAVEAU discloses Fig. 8 is an example of a multi-camera network 56 including four time-synchronized cameras 44a to 44d rigidly mounted and disposed around an observable scene 54. Further in paragraph [0125]-CLAVEAU discloses method 300 extrinsically calibrates a network of cameras using a calibration target (wherein the systems/techniques can be applied to or implemented in robotics, navigation systems, object positioning, etc.). In paragraph [0128]-CLAVEAU discloses each multi-camera bin is associated with a specific combination of two or more cameras having partially overlapping fields of view. Each multi-camera bin is a two-camera bin associated with two of the cameras of the network (wherein six camera pairs and six associated two-camera bins can be defined from the four cameras 44a to 44d). The qualified target images stored in each multi-camera bin can be used to either obtain the extrinsic camera parameters or validate the obtained extrinsic camera parameters), wherein each of the second images in the set includes the object (Fig. 13, #20 called a calibration target. Paragraph [0085]-CLAVEAU discloses the camera calibration process uses a planar calibration object or target provided with fiducial markers or other reference features on the surface thereof exposed to the cameras. The fiducial markers on the calibration target can form a calibration pattern, for example a checkerboard pattern or a dot matrix (wherein the object is a fiducial, such as a checkerboard pattern). In paragraph [0087]-CLAVEAU discloses feature extraction and image processing techniques can be used to detect and recognize fiducial features in captured target images. The inner corners 36 of the checkerboard pattern 26 of the calibration target 20 can be detected as identifying fiducial features that provide known world points (wherein fiducials may also include orientation markers)), each of the first images having a corresponding second image in the set of second images taken at a same time as the first image using a same pose (Fig. 13. Paragraph [0127]-CLAVEAU discloses for each camera, each target image represents a view of the calibration target captured in a respective target pose and at a respective acquisition time. In paragraph [0131]-CLAVEAU discloses the identifying step 306 can include a step of searching the calibration target in each target image acquired by each camera, which can involve looking for one or more fiducial features present on the calibration target. Please also see Fig. 4A-4l and paragraph [0091]);
performing a non-linear optimization over the first set of images and the second set of images to minimize the reprojection error for pairs of images from the first set and the second set, wherein an output of the non-linear optimization is a current extrinsics transform (Fig. 13. Paragraph [0127]-CLAVEAU discloses the method 300 can include a step 302 of providing, for each camera, a plurality of target images acquired with the camera. For each camera, each target image represents a view of the calibration target captured in a respective target pose and at a respective acquisition time. The images of the calibration target acquired by the cameras will provide reference images to be used in the calibration calculations. In paragraph [0128]-CLAVEAU discloses the qualified target images stored in each multi-camera bin can be used to obtain the extrinsic camera parameters or validate the obtained extrinsic camera parameters. Moreover, in paragraph [0142]-CLAVEAU discloses the global calibration of all the cameras can involve the refinement of the extrinsic parameters of all cameras through an optimization method such as non-linear least-squares analysis that seeks to minimize the reprojection error of the target pose fiducials); and
updating the extrinsics transform used by the robot based on the current extrinsics transform output from the non-linear optimization (Fig. 2. Paragraph [0143]-CLAVEAU discloses method 300 of FIG. 13 can allow the user to monitor the progress of the extrinsic camera calibration. One possible approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory. The obtaining steps 310 and 312 can be performed iteratively until the calibration error gets lower than a predetermined error value, at which point the providing step 302, identifying step 306 and assigning step 308 can also be stopped. In paragraph [0145]-CLAVEAU discloses once the extrinsic camera parameters have been obtained, the techniques can provide a step of validation of the calibration results. Criteria or measures can include the reprojection error and the rectification error for the intrinsic parameters; and the reconstruction error and the alignment or registration error for the overall extrinsic calibration. In paragraph [0146]-CLAVEAU discloses when the camera calibration is completed, the reprojection error for every checkerboard corner in every qualified target pose (i.e., reference image) can be computed and presented to the user. Please also read paragraph [0149]).
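Examiner's note, for illustration only: the non-linear optimization recited in claim 16 may be sketched as a least-squares refinement of the extrinsics over paired images, as below. This is a hypothetical Python sketch, not the method of any cited reference; it assumes known 3-D target points expressed in the first camera's frame and their detections in the second camera's images.

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def residuals(params, pts3d_cam1, pts2d_cam2, K2, dist2):
        # params = [rx, ry, rz, tx, ty, tz]: camera-1-to-camera-2 extrinsics
        rvec, tvec = params[:3], params[3:]
        proj, _ = cv2.projectPoints(pts3d_cam1, rvec, tvec, K2, dist2)
        return (proj.reshape(-1, 2) - pts2d_cam2).ravel()

    def refine_extrinsics(pairs, K2, dist2, x0):
        # pairs: (3-D points in camera 1's frame, detected pixels in image 2)
        # for image pairs captured at the same time with the same pose;
        # x0: initial six-parameter extrinsics estimate.
        def stacked(params):
            return np.concatenate(
                [residuals(params, p3, p2, K2, dist2) for p3, p2 in pairs])
        sol = least_squares(stacked, x0)  # minimizes total reprojection error
        return sol.x                      # current (refined) extrinsics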
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of CHI in view of KROEGER, and in further view of CLAVEAU and KOTHARI, with the teachings of CLAVEAU of capturing a set of second images from the second camera, wherein each of the second images in the set includes the object, each of the first images having a corresponding second image in the set of second images taken at a same time as the first image using a same pose; performing a non-linear optimization over the first set of images and the second set of images to minimize the reprojection error for pairs of images from the first set and the second set, wherein an output of the non-linear optimization is a current extrinsics transform; and updating the extrinsics transform used by the robot based on the current extrinsics transform output from the non-linear optimization.
The combination yields CHI's method capturing a set of second images from the second camera, wherein each of the second images in the set includes the object, each of the first images having a corresponding second image in the set of second images taken at a same time as the first image using a same pose; performing a non-linear optimization over the first set of images and the second set of images to minimize the reprojection error for pairs of images from the first set and the second set, wherein an output of the non-linear optimization is a current extrinsics transform; and updating the extrinsics transform used by the robot based on the current extrinsics transform output from the non-linear optimization.
The motivation behind the modification would have been to obtain a method that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
Regarding claim 17, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the method of claim 15. CHI further teaches the method further comprising: determining a pose of the robot using the updated extrinsics transform (Fig. 1. Paragraph [0128]-CHI discloses after the target pose information corresponding to the target object is obtained, the robot may control, according to a relative pose relationship between current pose information of the target motion mechanism and the target pose information, the target motion mechanism to perform at least one of translation or rotation, to avoid the target object or grab the target object. Please also read paragraph [0062, 0066 and 0107-0110]).
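Examiner's note, for illustration only: determining a pose with an updated extrinsics transform reduces to composing homogeneous transforms, as in this hypothetical sketch (not drawn from any cited reference):

    import numpy as np

    def target_pose_in_body(T_body_cam, T_cam_target):
        # Compose the updated camera extrinsics with the visually estimated
        # target pose to express the pose in the body (ontology) frame.
        return T_body_cam @ T_cam_target  # both 4x4 homogeneous transforms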
Regarding claim 18, CHI explicitly teaches a robot (Fig. 1, #120 called a robot. Paragraph [0066]), comprising:
a perception system (Fig. 1. Paragraph [0164]-CHI discloses as shown in FIG. 10, a robot control apparatus is provided. The apparatus may be part of a computer device by using a software module, a hardware module, or a combination of the software module and the hardware module. The apparatus specifically includes: a visual pose information obtaining module 1002, a target transformation relationship obtaining module 1004, a target pose information obtaining module 1006, and a control module 1008. Further in paragraph [0066]-CHI discloses the spatial calibration method of a robot ontology coordinate system based on a visual perception device provided in the embodiments of this disclosure is applicable to the disclosure environment shown in FIG. 1) including:
a first camera (Fig. 1, #110 called a camera Paragraph [0066]) configured to capture a first image (Fig. 1. Paragraph [0066]-CHI discloses a camera 110 is mounted on the body 121, and an end of the target motion mechanism 122 is connected to a target calibration object 140 by using a connecting member 130. It may be understood that the end of the target motion mechanism 122 may be directly connected to the target calibration object 140. The camera 110 is configured for visual perception, and the robot 120 may perform, according to a preset eye-in-foot calibration algorithm, the spatial calibration method of a robot ontology coordinate system based on a visual perception device), wherein the first image includes an object (Fig. 1, #140 called a target calibration object. Paragraph [0066]) having at least one known dimension (Fig. 1. Paragraph [0074]-CHI discloses the target calibration object refers to an object used for calibration. A size of the target calibration object may be pre-determined and may include lengths between feature points of the target calibration object (wherein the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics and the target calibration object may be disposed at an end of the target motion mechanism (i.e. legs), and the feature points used may be the lattices of the pattern). In paragraph [0075]-CHI discloses in FIG. 4, a rectangle range formed by dashed lines represents a field of view of the visual perception device. Setting of sampling points of the target motion mechanism need to ensure that the calibration object is within the field of view. In paragraph [0111]-CHI discloses the coordinates of the target calibration object may be represented by using coordinates of feature points in the target calibration object);
CHI fails to explicitly teach a second camera configured to capture a second image, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; and at least one computer processor configured to: project a set of points on the object in the first image to pixel locations in the second image; determine, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
However, KROEGER explicitly teaches a second camera (Fig. 4, #106(2) and #206(2), called a second camera. Paragraph [0021 and 0035]) configured to capture a second image (Fig. 1-2, #110(2) and #208(2) called a second image. Paragraph [0021 and 0036]. Further in paragraph [0034]-KROEGER discloses FIG. 2 depicts a pictorial flow diagram of process 200 for calibrating cameras disposed on an autonomous vehicle (wherein an autonomous vehicle is a robot, the calibration processes include both intrinsic and extrinsic calibration, the calibration process involves the projection of points into overlapping images, and each point corresponds to the same image feature or portion in both images). In paragraph [0035]-KROEGER discloses at operation 202, the process can include capturing images of an environment at multiple cameras. The operation 202 illustrates a vehicle 204 having a first camera 206(1) and a second camera 206(2) disposed on the vehicle 204. The first camera 206(1) captures image data such as an image 208(1) and the second camera 206(2) captures image data such as a second image 208(2)), wherein the second image includes the object (Fig. 5. Paragraph [0059]-KROEGER discloses the perception component 522 can include functionality to perform object detection, segmentation, and/or classification. The perception component 522 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 502 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, unknown, etc.). The perception component 522 can provide processed sensor data that indicates one or more characteristics associated with a detected entity and/or the environment in which the entity is positioned. Characteristics associated with an entity can include an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., a classification), a velocity of the entity, an extent of the entity (e.g., size), etc. (wherein the object may be an image feature or image portion present in both images, such as a vehicle, a line, edge, etc.). Please also read paragraph [0020-0021 and 0050]), wherein a field of view of the first camera (Fig. 4, #106(1) and #206(1), called a first camera. Paragraph [0021 and 0036]) and a field of view of the second camera at least partially overlap (Fig. 2. Paragraph [0035]-KROEGER discloses the cameras 206(1), 206(2) are generally configured next to each other, both facing in the direction of travel and with significant overlap in their fields of view (wherein image acquisition between both cameras may occur simultaneously or over different times). Please also see Fig. 1 and read paragraph [0022]); and
at least one computer processor (Fig. 5, #516 called processors. Paragraph [0057]-KROEGER discloses vehicle computing device 504 can include one or more processors 516 and memory 518 communicatively coupled with the one or more processors 516) configured to:
project a set of points on the object (Fig. 1. Paragraph [0037]-KROEGER discloses at operation 210, the process can include identifying point pairs. The operation 210 may identify, for portions of the first image 208(1) and the second image 208(2) that overlap, first points 212a, 214a, 216a, 218a, 220a, 222a, 224a, 226a, 228a, 230a, in the first image 208(1) and second points 212b, 214b, 216b, 218b, 220b, 222b, 224b, 226b, 228b, 230b, in the second image 208(2). Please also read paragraph [0023 and 0040-0043]) in the first image (Fig. 1-2, #110(1) and #208(1), called a first image. Paragraph [0021 and 0035]) to pixel locations in the second image (Fig. 1-2, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]. Further in paragraph [0037]-KROEGER discloses the first points and the second points may be image features, e.g., with the first point 212a corresponding to an image feature or portion in the first image 208(1) and the second point 212b corresponding to the same image feature or portion in the second image 208(2), the first point 214a corresponding to another image feature or portion in the first image 208(1) and the second point 214b corresponding to the same other image feature or portion in the second image 208(2), and so forth (wherein the object may be an image feature, logical grouping of features or image portion that is present in both images, such as a vehicle, roadway, line, edge or reference point));
determine, for each point of the projected set of points on the object (Fig. 2. Paragraph [0040]-KROEGER discloses at operation 232, the process 200 can determine errors associated with point pairs. In paragraph [0041]-KROEGER discloses the point 212b has an associated hollow circle 238, the point 216b has an associated hollow circle 240, and the point 228b has an associated hollow circle 242. The points 212b, 216b, 228b generally represent the detected location of the features (e.g., distorted location) and the hollow circles 238, 240, 242 represent reprojections of associated features in the environment. Each hollow circle 238, 240, 242 represents a reprojection of the first points corresponding to the points 212b, 216b, 228b. The hollow circle 238 may represent a reprojection of the point 212a from the first image 208(a) into the second image 208(b), the hollow circle 240 may represent a reprojection of the point 216a from the first image 208(a) into the second image 208(b), and the hollow circle 242 may represent a reprojection of the point 228a from the first image 208(a) into the second image 208(b), each assuming an associated depth of the points. Please also read paragraph [0023-0025]), a first distance between the point on the object in the second image (Fig. 1, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]) and the pixel location of the corresponding projected point in the second image (Fig. 2. Paragraph [0041]-KROEGER discloses the error associated with the reprojection optimization for the point 212b may be the distance, e.g., the Euclidian distance measured in pixels, between the point 212b and the hollow circle 238. The error associated with the point 216b may be the distance between the point 216b and the hollow circle 240 and the error associated with the point 228b may be the distance between the point 228b and the hollow circle 242 (wherein the hollow circles represent reprojected points of the points in the first image #208(1), which, as mentioned above, correspond to the same feature or image portion as the points projected in the second image #208(2)));
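Examiner's note, for illustration only: the projection and per-point distance steps of claim 18 may be sketched as follows. This is hypothetical Python, not the implementation of any cited reference; it assumes a pinhole camera model and per-point depth estimates.

    import numpy as np

    def project_into_second_image(pts_px_1, depths, K1, K2, T_2_1):
        # Back-project pixels from image 1 using assumed depths, transform
        # the 3-D points into camera 2's frame, and project into image 2.
        pts_h = np.column_stack([pts_px_1, np.ones(len(pts_px_1))])
        rays = (np.linalg.inv(K1) @ pts_h.T).T         # unit-depth rays
        p1 = rays * depths[:, None]                    # 3-D in camera 1
        p2 = (T_2_1[:3, :3] @ p1.T).T + T_2_1[:3, 3]   # 3-D in camera 2
        uv = (K2 @ p2.T).T
        return uv[:, :2] / uv[:, 2:3]                  # pixel locations

    def first_distances(projected_px, detected_px):
        # Euclidean pixel distance between each projected point and the
        # detected location of the same feature in the second image.
        return np.linalg.norm(projected_px - detected_px, axis=1)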
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI of having a robot, comprising: a perception system including: a first camera configured to capture a first image, wherein the first image includes an object having at least one known dimension, with the teachings of KROEGER of having a second camera configured to capture a second image, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; and at least one computer processor configured to: project a set of points on the object in the first image to pixel locations in the second image; and determine, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
The combination yields CHI's robot having a second camera configured to capture a second image, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; and at least one computer processor configured to: project a set of points on the object in the first image to pixel locations in the second image; and determine, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
The motivation behind the modification would have been to obtain a robot that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KROEGER concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception of systems by providing more accurate starting points and better fused data for segmentation and classification. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KROEGER (US 20200357140 A1), Abstract and Paragraph [0017].
CHI in view of KROEGER fail to explicitly teach determine a reprojection error based on a statistical measure of the first distances.
However, CLAVEAU explicitly teaches determine a reprojection error based on a statistical measure of the first distances (Fig. 13. Paragraph [0125]-CLAVEAU discloses FIG. 13 is a flow diagram of method 300 for extrinsically calibrating a network of cameras using a calibration target (wherein the systems and techniques can be applied to robotics and the camera network may comprise stereo cameras and/or time-synchronized cameras 44a to 44d with overlapping views that acquire images of the same calibration target (e.g. fiducial) from different views). In paragraph [0143]-CLAVEAU discloses one approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory (wherein the reprojection error represents the distances between projected points and an average reprojection error represents a statistical measure)).
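Examiner's note, for illustration only: reducing the per-point distances to a single reprojection error by a statistical measure may be as simple as the following hypothetical sketch (an average is one possible measure; a median or RMS would be alternatives):

    import numpy as np

    def reprojection_error(first_distances):
        # Average of the per-point pixel distances.
        return float(np.mean(first_distances))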
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the robot of CHI in view of KROEGER with the teachings of CLAVEAU of determining a reprojection error based on a statistical measure of the first distances.
The combination yields CHI's robot determining a reprojection error based on a statistical measure of the first distances.
The motivation behind the modification would have been to obtain a robot that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
CHI in view of KROEGER fail to explicitly teach and generate an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
However, KOTHARI explicitly teaches and generate an instruction to perform an action (Fig. 1. Paragraph [0029]-KOTHARI discloses referring now to FIG. 1, a system 100 for camera calibration and/or validating camera calibration is illustratively depicted (wherein the system 100 is an autonomous vehicle and calibration uses a calibration target such as a fiducial). In paragraph [0062]-KOTHARI discloses the system may determine whether the confidence score is above or below a threshold (wherein the threshold may be predetermined, updated and/or dynamic). If the confidence score is above the threshold, then the system may consider the sensor (e.g., camera) to be calibrated. If the confidence score is below the threshold, then the system may consider the sensor (e.g., camera) to be not calibrated. In paragraph [0063]-KOTHARI discloses if the cameras are calibrated (306: YES), steps 302-306 may be repeated, for example, periodically, upon occurrence of certain events (e.g., a detection of a jolt, rain, etc.), and/or upon receipt of user instructions. If one or more cameras are not calibrated (306: NO), the system may generate a signal that will result in an action (308)) when the reprojection error is greater than a threshold value (Fig. 1. Paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. A reprojection error larger than about 0.5 pixel, about 0.7 pixel, about 1.1 pixel, about 1.3 pixel, about 0.5-1.1 pixel, about 0.6-1.2 pixel, or the like may be indicative of a sensor calibration issue (wherein errors and validation are based on statistical analyses). In paragraph [0061]-KOTHARI discloses the system may use the camera calibration validation factor and the motion-based validation factor to generate a confidence score, which is an assessment of confidence in the accuracy of the calibration of the camera that captured the image frames of the calibration target), the threshold value set based on the at least one known dimension (Fig. 1. Paragraph [0036]-KOTHARI discloses the process for calibrating cameras involves imaging a calibration target from multiple viewpoints, and then identifying calibration points in the image that correspond to known points on the calibration target. In paragraph [0037]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated (wherein a calibration target may be a fiducial, checkerboard and/or AprilTags, and the targets are associated with tags). In paragraph [0041]-KOTHARI discloses a tag may include associated fiducial information such as an identification of the corresponding fiducial, size of the fiducial, color of the fiducial, associated corner of the fiducial (e.g., top left, bottom right, etc.), or the like).
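Examiner's note, for illustration only: thresholding the reprojection error against a value derived from the object's known dimension, and generating an instruction when the threshold is exceeded, may be sketched as follows. This is hypothetical Python; the 200 mm target size and 0.5 mm tolerance are invented values, not taken from any cited reference.

    # Known physical dimension of the calibration object (assumed value).
    TARGET_SIZE_MM = 200.0

    def check_calibration(reproj_error_px, observed_size_px):
        # Scale a physical tolerance into pixels using the known dimension,
        # then compare the reprojection error against that threshold.
        px_per_mm = observed_size_px / TARGET_SIZE_MM
        threshold_px = 0.5 * px_per_mm  # tolerate 0.5 mm of error
        if reproj_error_px > threshold_px:
            return "recalibrate"        # instruction to perform an action
        return "calibrated"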
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the robot of CHI in view of KROEGER and in further view of CLAVEAU with the teachings of KOTHARI of generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The combination yields CHI's robot generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The motivation behind the modification would have been to obtain a robot that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while KOTHARI's systems and methods improve the calibration accuracy of the sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 19, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teach the robot of claim 18. CHI in view of KROEGER fail to explicitly teach wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
However, KOTHARI explicitly teaches wherein the object includes a set of corner points (Fig. 3. Paragraph [0035]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated. A checkerboard fiducial has a quadrilateral boundary within which varying patterns of black and white blocks are arranged. The pattern can include any shape, image, icon, letter, symbol, number, or pattern. Example checkerboard fiducials can include AprilTags. In paragraph [0039]-KOTHARI discloses uniquely identifiable tags 205(a)-(n) are positioned at one or more corners of some or all of the fiducials 201(a)-(n), where a tag may be used to identify a fiducial within a captured image (wherein a fully tagged calibration target 270 has a tag on each of its four corners, and tags include information such as the size of the fiducial, location, associated corner (e.g., top left, bottom right, etc.), etc.). The uniquely identifiable tags may be positioned at a subset of the corners of some of the fiducials (e.g., 2, 3, etc.). In paragraph [0055]-KOTHARI discloses corners of the fiducials on the calibration target image may be used for precisely identifying the feature point location), and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points (Fig. 3. In paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image).
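Examiner's note, for illustration only: detecting the corner points of a checkerboard fiducial, so that two or more of them can be projected between images, may be sketched with OpenCV as below. This is a hypothetical sketch, not KOTHARI's implementation; the 7x6 pattern size is an invented value.

    import cv2

    def detect_corners(gray_image, pattern_size=(7, 6)):
        # Detect inner checkerboard corners, then refine them to
        # sub-pixel accuracy for use as projected points.
        found, corners = cv2.findChessboardCorners(gray_image, pattern_size)
        if not found:
            return None
        return cv2.cornerSubPix(
            gray_image, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))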
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the robot of CHI in view of KROEGER and in further view of CLAVEAU, comprising: a perception system including: a first camera configured to capture a first image; and a second camera configured to capture a second image, wherein the second image includes the object, with the teachings of KOTHARI of having wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
The combination yields CHI's robot wherein the object includes a set of corner points, and wherein the set of points on the object projected to pixel locations in the second image includes at least two of the set of corner points.
The motivation behind the modification would have been to obtain a robot that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while KOTHARI's systems and methods improve the calibration accuracy of the sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraph [0044 and 0061].
Regarding claim 36, CHI explicitly teaches a non-transitory computer readable medium encoded with a plurality of instructions that, when executed by at least one computer processor, perform a method (Fig. 11. Paragraph [0171]-CHI discloses a computer device is provided. The computer device may be a robot, and an internal structure diagram thereof may be shown in FIG. 11. The computer device includes a processor, a memory, and a network interface that are connected by using a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium (or a non-transitory storage medium) and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The computer-readable instructions, when executed by the processor, implement a spatial calibration method of a robot ontology coordinate system based on a visual perception device or a robot control method), the method comprising:
receiving a first image (Fig. 1. Paragraph [0121]-CHI discloses when operations need to be performed by using the robot, the robot may control the visual perception device in real time to perform image acquisition on the target object in the environment, to obtain a current image. After obtaining the current image, the robot may obtain visual pose information of the target object in the visual perception coordinate system according to information such as coordinates of the target object in the image and internal parameters of a camera. Please also read paragraph [0066]) captured by a first camera (Fig. 1, #110 called a camera Paragraph [0066]) of a robot (Fig. 1, #120 called a robot. Paragraph [0066]), wherein the first image includes an object having at least one known dimension (Fig. 1. Paragraph [0074]-CHI discloses the target calibration object refers to an object used for calibration. A size of the target calibration object may be pre-determined and may include lengths between feature points of the target calibration object (wherein the calibration object may be a checkerboard, Apriltag, ArUco, or other graphics and the target calibration object may be disposed at an end of the target motion mechanism (i.e. legs)). In paragraph [0075]-CHI discloses in FIG. 4, a rectangle range formed by dashed lines represents a field of view of the visual perception device. Setting of sampling points of the target motion mechanism need to ensure that the calibration object is within the field of view. In paragraph [0111]-CHI discloses the coordinates of the target calibration object may be represented by using coordinates of feature points in the target calibration object);
CHI fails to explicitly teach receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; projecting a set of points on the object in the first image to pixel locations in the second image; determining, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
However, KROEGER explicitly teaches receiving a second image (Fig. 1-2, #110(2) and #208(2) called an image. Paragraph [0021]) captured by a second camera (Fig. 4, #106(2) and #206(2), called a second camera. Paragraph [0021 and 0035]) of the robot (Fig. 4, #104 and #204, called an autonomous vehicle. Paragraph [0021 and 0035]. Further in paragraph [0034]-KROEGER discloses FIG. 2 depicts a pictorial flow diagram of process 200 for calibrating cameras disposed on an autonomous vehicle (wherein an autonomous vehicle is a robot, the calibration processes include both intrinsic and extrinsic calibration, the calibration process involves the projection of points into overlapping images, and each point corresponds to the same image feature or portion in both images). In paragraph [0035]-KROEGER discloses at operation 202, the process can include capturing images of an environment at multiple cameras. The operation 202 illustrates a vehicle 204 having a first camera 206(1) and a second camera 206(2) disposed on the vehicle 204. The first camera 206(1) captures image data such as an image 208(1) and the second camera 206(2) captures image data such as a second image 208(2)), wherein the second image includes the object (Fig. 5. Paragraph [0059]-KROEGER discloses the perception component 522 can include functionality to perform object detection, segmentation, and/or classification. The perception component 522 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 502 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, unknown, etc.). The perception component 522 can provide processed sensor data that indicates one or more characteristics associated with a detected entity and/or the environment in which the entity is positioned. Characteristics associated with an entity can include an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., a classification), a velocity of the entity, an extent of the entity (e.g., size), etc. (wherein the object may be an image feature or image portion present in both images, such as a vehicle, a line, edge, etc.). Please also read paragraph [0020-0021 and 0050]), wherein a field of view of the first camera (Fig. 4, #106(1) and #206(1), called a first camera. Paragraph [0021 and 0036]) and a field of view of the second camera at least partially overlap (Fig. 2. Paragraph [0035]-KROEGER discloses the cameras 206(1), 206(2) are generally configured next to each other, both facing in the direction of travel and with significant overlap in their fields of view (wherein image acquisition between both cameras may occur substantially simultaneously or over different times). Please also see Fig. 1 and read paragraph [0022]);
projecting a set of points on the object (Fig. 1. Paragraph [0037]-KROEGER discloses at operation 210, the process can include identifying point pairs. The operation 210 may identify, for portions of the first image 208(1) and the second image 208(2) that overlap, first points 212a, 214a, 216a, 218a, 220a, 222a, 224a, 226a, 228a, 230a, in the first image 208(1) and second points 212b, 214b, 216b, 218b, 220b, 222b, 224b, 226b, 228b, 230b, in the second image 208(2). Please also read paragraph [0023 and 0040-0043]) in the first image (Fig. 1-2, #110(1) and #208(1), called a first image. Paragraph [0021 and 0035]) to pixel locations in the second image (Fig. 1-2, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]. Further in paragraph [0037]-KROEGER discloses the first points and the second points may be image features, e.g., with the first point 212a corresponding to an image feature or portion in the first image 208(1) and the second point 212b corresponding to the same image feature or portion in the second image 208(2), the first point 214a corresponding to another image feature or portion in the first image 208(1) and the second point 214b corresponding to the same other image feature or portion in the second image 208(2), and so forth (wherein the object may be an image feature, logical grouping of features or image portion that is present in both images, such as a vehicle, roadway, line, edge or reference point)); and
determining, for each point of the projected set of points on the object (Fig. 2. Paragraph [0040]-KROEGER discloses at operation 232, the process 200 can determine errors associated with point pairs. In paragraph [0041]-KROEGER discloses the point 212b has an associated hollow circle 238, the point 216b has an associated hollow circle 240, and the point 228b has an associated hollow circle 242. The points 212b, 216b, 228b generally represent the detected location of the features (e.g., distorted location) and the hollow circles 238, 240, 242 represent reprojections of associated features in the environment. Each hollow circle 238, 240, 242 represents a reprojection of the first points corresponding to the points 212b, 216b, 228b. The hollow circle 238 may represent a reprojection of the point 212a from the first image 208(a) into the second image 208(b), the hollow circle 240 may represent a reprojection of the point 216a from the first image 208(a) into the second image 208(b), and the hollow circle 242 may represent a reprojection of the point 228a from the first image 208(a) into the second image 208(b), each assuming an associated depth of the points. Please also read paragraph [0023-0025]), a first distance between the point on the object in the second image (Fig. 1, #110(2) and #208(2), called a second image. Paragraph [0021 and 0035]) and the pixel location of the corresponding projected point in the second image (Fig. 2. Paragraph [0041]-KROEGER discloses the error associated with the reprojection optimization for the point 212b may be the distance, e.g., the Euclidian distance measured in pixels, between the point 212b and the hollow circle 238. The error associated with the point 216b may be the distance between the point 216b and the hollow circle 240 and the error associated with the point 228b may be the distance between the point 228b and the hollow circle 242);
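Examiner's note, for illustration only: the claim 36 steps addressed above may be exercised end to end on dummy data as in the self-contained Python sketch below. All numeric values are invented; an identity rotation and a 10 cm baseline are assumed.

    import numpy as np

    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    T = np.eye(4); T[0, 3] = 0.1                       # extrinsics: 10 cm baseline
    pts1 = np.array([[300.0, 200.0], [340.0, 220.0]])  # pixels in the first image
    depths = np.array([2.0, 2.1])                      # assumed depths (metres)
    rays = (np.linalg.inv(K) @ np.column_stack([pts1, np.ones(2)]).T).T
    p2 = rays * depths[:, None] + T[:3, 3]             # 3-D points in camera 2
    uv = (K @ p2.T).T
    proj = uv[:, :2] / uv[:, 2:3]                      # projected pixel locations
    detected = np.array([[325.5, 200.2], [363.6, 220.1]])
    dists = np.linalg.norm(proj - detected, axis=1)    # first distances
    print(dists.mean())                                # reprojection error (average)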
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI of having a non-transitory computer readable medium encoded with a plurality of instructions that, when executed by at least one computer processor, perform a method, the method comprising: receiving a first image captured by a first camera of a robot, wherein the first image includes an object having at least one known dimension, with the teachings of KROEGER of receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; projecting a set of points on the object in the first image to pixel locations in the second image; and determining, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
The combination yields CHI's non-transitory computer readable medium receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap; projecting a set of points on the object in the first image to pixel locations in the second image; and determining, for each point of the projected set of points on the object, a first distance between the point on the object in the second image and the pixel location of the corresponding projected point in the second image.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both KROEGER and CHI concern image processing and camera calibration. KROEGER's systems and methods improve the functioning of a computing device, the calibration of cameras, and the processing and perception of systems by providing more accurate starting points and better fused data for segmentation and classification, while CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship. Please see KROEGER (US 20200357140 A1), Abstract and Paragraph [0017] and CHI (US 20220258356 A1), Abstract and Paragraph [0090].
CHI in view of KROEGER fail to explicitly teach and determining a reprojection error based on a statistical measure of the first distances.
However, CLAVEAU explicitly teaches and determining a reprojection error based on a statistical measure of the first distances (Fig. 13. Paragraph [0125]-CLAVEAU discloses FIG. 13 is a flow diagram of method 300 for extrinsically calibrating a network of cameras using a calibration target (wherein the systems and techniques can be applied to robotics and the camera network may comprise stereo cameras and/or time-synchronized cameras 44a to 44d with overlapping views that acquire images of the same calibration target (e.g. fiducial) from different views). In paragraph [0143]-CLAVEAU discloses one approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory (wherein the reprojection error represents the distances between projected points and an average reprojection error represents a statistical measure)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory computer readable medium of CHI in view of KROEGER with the teachings of CLAVEAU of determining a reprojection error based on a statistical measure of the first distances.
The combination yields CHI's non-transitory computer readable medium determining a reprojection error based on a statistical measure of the first distances.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods provide a transformation relationship between the visual perception coordinate system and the ontology coordinate system that can be obtained efficiently and accurately, improving the efficiency of obtaining that transformation relationship, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090] and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraph [0007, 0020, 0022 and 0117].
CHI in view of KROEGER fail to explicitly teach and generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
However, KOTHARI explicitly teaches and generating an instruction to perform an action (Fig. 1. Paragraph [0029]-KOTHARI discloses referring now to FIG. 1, a system 100 for camera calibration and/or validating camera calibration is illustratively depicted (wherein the system 100 is an autonomous vehicle and calibration uses a calibration target such as a fiducial). In paragraph [0062]-KOTHARI discloses the system may determine whether the confidence score is above or below a threshold (wherein the threshold may be predetermined, updated and/or dynamic). If the confidence score is above the threshold, then the system may consider the sensor (e.g., camera) to be calibrated. If the confidence score is below the threshold, then the system may consider the sensor (e.g., camera) to be not calibrated. In paragraph [0063]-KOTHARI discloses if the cameras are calibrated (306: YES), steps 302-306 may be repeated, for example, periodically, upon occurrence of certain events (e.g., a detection of a jolt, rain, etc.), and/or upon receipt of user instructions. If one or more cameras are not calibrated (306: NO), the system may generate a signal that will result in an action (308)) when the reprojection error is greater than a threshold value (Fig. 1. Paragraph [0057]-KOTHARI discloses for camera-based calibration factor, the identified pixel coordinates of the corners may be re-projected back and correlated with calibration target images to determine a reprojection error as the distance between the pixel coordinates of a corner detected in a calibration image and a corresponding world point projected into the same image. A value of reprojection error larger than 1 pixel may be indicative of a sensor calibration issue. A reprojection error larger than about 0.5 pixel, about 0.7 pixel, about 1.1 pixel, about 1.3 pixel, about 0.5-1.1 pixel, about 0.6-1.2 pixel, or the like may be indicative of a sensor calibration issue (wherein errors and validation are based on statistical analyses). In paragraph [0061]-KOTHARI discloses the system may use the camera calibration validation factor and the motion-based validation factor to generate a confidence score, which is an assessment of confidence in the accuracy of the calibration of the camera that captured the image frames of the calibration target), the threshold value set based on the at least one known dimension (Fig. 1. Paragraph [0036]-KOTHARI discloses the process for calibrating cameras involves imaging a calibration target from multiple viewpoints, and then identifying calibration points in the image that correspond to known points on the calibration target. In paragraph [0037]-KOTHARI discloses referring now to FIG. 2A, an example calibration target 270 (e.g., the calibration target 170 of FIG. 1) is illustrated (wherein a calibration target may be a fiducial, checkerboard and/or AprilTags, and the targets are associated with tags). In paragraph [0041]-KOTHARI discloses a tag may include associated fiducial information such as an identification of the corresponding fiducial, size of the fiducial, color of the fiducial, associated corner of the fiducial (e.g., top left, bottom right, etc.), or the like).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU of having a non-transitory computer readable medium encoded with a set of instructions that, when executed by at least one computer processor, perform a method, with the teachings of KOTHARI of generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The combination results in CHI's non-transitory computer readable medium generating an instruction to perform an action when the reprojection error is greater than a threshold value, the threshold value set based on the at least one known dimension.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and KOTHARI concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while KOTHARI's systems and methods improve the calibration accuracy of sensors of an autonomous vehicle. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090], and KOTHARI et al. (US 20230399015 A1), Abstract and Paragraphs [0044] and [0061].
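For illustration only and not as a ground of rejection: the kind of check KOTHARI describes, re-projecting known target points, measuring the pixel distance to the detected points, and generating an action instruction when the error exceeds a threshold tied to a known target dimension, can be sketched in a few lines of Python. All names, the ideal pinhole model, and the 1% threshold rule below are hypothetical assumptions for exposition and are not taken from KOTHARI, CHI, or the claims.

    # Minimal sketch, assuming an ideal pinhole camera with no lens
    # distortion; all names and the threshold rule are hypothetical.
    import numpy as np

    def pinhole_project(points_3d, K, R, t):
        """Project Nx3 world points to pixels using intrinsics K and extrinsics (R, t)."""
        cam = (R @ points_3d.T + t.reshape(3, 1)).T      # world -> camera frame
        norm = cam[:, :2] / cam[:, 2:3]                  # perspective divide
        return (K[:2, :2] @ norm.T).T + K[:2, 2]         # focal scale + principal point

    def check_calibration(detected_px, points_3d, K, R, t, known_dim_m):
        """Return an action instruction when the mean reprojection error is too large."""
        reprojected = pinhole_project(points_3d, K, R, t)
        errors = np.linalg.norm(detected_px - reprojected, axis=1)   # per-point, pixels
        # Hypothetical rule: allow error up to 1% of the known edge length
        # as it appears in the image: (length in meters) * fx / mean depth.
        depth = float((R @ points_3d.T + t.reshape(3, 1))[2].mean())
        threshold_px = 0.01 * known_dim_m * K[0, 0] / depth
        if errors.mean() > threshold_px:
            return {"action": "recalibrate_camera"}      # instruction to perform an action
        return {"action": "none"}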
Regarding claim 53, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the robot of claim 18. CHI in view of KROEGER fail to explicitly teach wherein the statistical measure is an average of the first distances.
However, CLAVEAU explicitly teaches wherein the statistical measure is an average of the first distances (Fig. 13. Paragraph [0125]-CLAVEAU discloses FIG. 13 is a flow diagram of method 300 for extrinsically calibrating a network of cameras using a calibration target (wherein the systems and techniques can be applied to robotics and the camera network may comprise stereo cameras and/or time-synchronized cameras 44a to 44d with overlapping views that acquire images of the same calibration target (e.g., fiducial) from different views). In paragraph [0143]-CLAVEAU discloses one approach to assess the completion level of the extrinsic calibration is to continuously or repeatedly (e.g., periodically) compute the average reprojection error of target points in the reference images and then compare the computed error with a predetermined threshold below which extrinsic calibration is considered complete or satisfactory. Please also read paragraphs [0146] and [0152]-[0154]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI of having a robot, with the teachings of CLAVEAU of having the statistical measure be an average of the first distances.
The combination results in CHI's robot determining a reprojection error based on a statistical measure of the first distances, wherein the statistical measure is an average of the first distances.
The motivation behind the modification would have been to obtain a robot that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090], and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraphs [0007], [0020], [0022] and [0117].
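For illustration only and not as a ground of rejection: CLAVEAU's use of the average reprojection error as the statistical measure, compared against a predetermined threshold, can be sketched as follows. The function name, sample values, and the 0.5-pixel threshold are hypothetical assumptions, not drawn from CLAVEAU.

    # Minimal sketch: the 'statistical measure' as the arithmetic mean of the
    # per-point reprojection distances; names and numbers are hypothetical.
    import numpy as np

    def mean_reprojection_error(detected_px, reprojected_px):
        """Average Euclidean pixel distance between detections and reprojections."""
        distances = np.linalg.norm(detected_px - reprojected_px, axis=1)
        return float(distances.mean())

    detected = np.array([[100.2, 50.1], [200.4, 80.3], [150.0, 120.7]])
    reproj = np.array([[100.0, 50.0], [200.0, 80.0], [150.3, 121.0]])
    # Calibration deemed complete when the average falls below the threshold.
    print(mean_reprojection_error(detected, reproj) < 0.5)   # True -> satisfactory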
Regarding claim 54, CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI explicitly teaches the non-transitory computer readable medium of claim 36. CHI in view of KROEGER fail to explicitly teach wherein the statistical measure is an average of the first distances.
However, CLAVEAU explicitly teaches wherein the statistical measure is an average of the first distances (Fig. 6. Paragraph [0149]-CLAVEAU discloses the validation method based on 3D reconstruction involves reconstructing the inner corners of a checkerboard calibration target using stereo information or techniques, and then reprojecting the resulting 3D points into the validation images. The root-mean-square (RMS) value of the reprojection error can be computed for each image. The validation method based on image registration consists in projecting the image of one camera into the image of another camera. A target distance must be specified. The default target distance can be the average z-coordinate (depth) of the reconstructed checkerboard corners. Please also read paragraphs [0146] and [0152]-[0154]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHI in view of KROEGER and in further view of CLAVEAU and in further view of KOTHARI of having a non-transitory computer readable medium encoded with a set of instructions that, when executed by at least one computer processor, perform a method, with the teachings of CLAVEAU of having the statistical measure be an average of the first distances.
The combination results in CHI's non-transitory computer readable medium determining a reprojection error based on a statistical measure of the first distances, wherein the statistical measure is an average of the first distances.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium that improves the functioning of a computing device, perception systems, and camera/sensor calibration, since both CHI and CLAVEAU concern image processing and camera calibration. CHI's systems and methods allow the transformation relationship between the visual perception coordinate system and the ontology coordinate system to be obtained efficiently and accurately, while CLAVEAU's systems and methods improve camera calibration as well as the efficiency and execution time of the image acquisition and analysis process. Please see CHI (US 20220258356 A1), Abstract and Paragraph [0090], and CLAVEAU et al. (US 20170287166 A1), Abstract and Paragraphs [0007], [0020], [0022] and [0117].
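For illustration only and not as a ground of rejection: the root-mean-square (RMS) form of the reprojection error that CLAVEAU's paragraph [0149] computes per image differs from the plain average in that larger per-point distances are weighted more heavily. A minimal sketch, with hypothetical names:

    # Minimal sketch of a per-image RMS reprojection error; hypothetical names.
    import numpy as np

    def rms_reprojection_error(detected_px, reprojected_px):
        """Root-mean-square of per-point reprojection distances for one image."""
        sq_dists = np.sum((detected_px - reprojected_px) ** 2, axis=1)
        return float(np.sqrt(sq_dists.mean()))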
Conclusion
Listed below is prior art made of record and not relied upon that is considered pertinent to applicant's disclosure.
WANG et al. (US 20220375129 A1)- Provided are methods for deep learning-based camera calibration, which can include receiving first and second images captured by a camera, processing the first image using a first neural network to determine a depth of the first image, processing the first image and the second image using a second neural network to determine a transformation between a pose of the camera for the first image and a pose of the camera for the second image, generating a projection image based on the depth of the first image, the transformation of the pose of the camera, and intrinsic parameters of the camera, comparing the second image and the projection image to determine a reprojection error, and adjusting at least one of the intrinsic parameters of the camera based on the reprojection error. Systems and computer program products are also provided. Please see Figs. 2 and 5-6 and the Abstract.
HUANG et al. (US 20210004610 A1)- According to an aspect of an embodiment, operations may comprise determining a target position and orientation for a calibration board with respect to a camera of a vehicle, detecting a first position and orientation of the calibration board with respect to the camera of the vehicle, determining instructions for moving the calibration board from the first position and orientation to the target position and orientation, transmitting the instructions to a device, detecting a second position and orientation of the calibration board, determining whether the second position and orientation is within a threshold of matching the target position and orientation, and, in response to determining that the second position and orientation is within the threshold of matching the target position and orientation, capturing one or more calibration camera images using the camera and calibrating one or more sensors of the vehicle using the one or more calibration camera images. Please see Figs. 18-21 and the Abstract.
HU et al. (US 20210110575 A1)- Systems and methods for automatic camera calibration without using a robotic actuator or similar hardware. An electronic display screen projects an image of a simulated three-dimensional calibration pattern, such as a checkerboard, oriented in a particular pose. The camera captures an image of the calibration pattern that is displayed on the screen, and this image together with the transform of the simulated three-dimensional calibration pattern are used to calibrate the camera. Multiple different pictures of different poses are employed to determine the optimal set of poses that produces the lowest reprojection error. To aid in selecting different poses, i.e., spatial positions and orientations of the simulated three-dimensional calibration pattern, poses may be selected from only that portion of the camera's field of view which is expected to be typically used in operation of the camera. Please see Figs. 1-3 and the Abstract.
BAO et al. (US 20190096091 A1)- Described herein are systems and methods that provide easy and effective camera calibration. In one or more embodiments, a set of calibration patterns, such as an array of unique markers, is used as a calibration target. The unique calibration markers resolve ambiguity when only partial views of the calibration target are captured. Embodiments disclosed herein also allow for an interactive calibration process that can direct users to specific locations within the camera image that require additional calibration image captures. Also, in one or more embodiments, the calibration process may be checked at one or more stages to help ensure that the final camera intrinsic parameters will be sufficiently accurate. One of the verifications may include addressing overfitting by, for example, using a first subset of the captured images to compute the intrinsic parameters and using a second subset of the captured images to verify those intrinsic parameters. Please see Figs. 4-6 and the Abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673