DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Application
2. Claims 1-18 have been examined in this application. This communication is the first action on the merits.
Priority
3. The Examiner notes that Applicant claims priority to Provisional Application No. 63/546,439, filed on 10/30/2023. Accordingly, the earliest effective filing date for this application is 10/30/2023.
35 U.S.C. § 101 Subject Matter Eligibility Analysis
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Step 1: Claims 1-18 each fall within a statutory category, namely a “method” or a “process” (Claims 1-8), a “computing device” or an “apparatus” (Claims 9-17), and a “non-transitory computer-readable medium” or an “article of manufacture” (Claim 18).
Step 2A, Prong One – Is the Claim Directed to a Judicial Exception?
This prong determines whether the claim is directed to a law of nature, a natural phenomenon, or an abstract idea. Independent Claims 1, 9, and 18 include a mix of physical components and data-processing steps. The Examiner provides a breakdown of each step below, using Independent Claim 1 as an example:
"capturing (i) depth data depicting an object, and (ii) image data depicting the object"; "capture, via a sensor, (i) depth data depicting an object, and (ii) image data depicting the object": This involves a specific physical action by a sensor (a machine/manufacture) in the real world to obtain data. Capturing data with a specialized sensor is a concrete, physical step.
"determining a mask corresponding to the object from the image data"; "identifying candidate points in the depth data based on the mask"; "for each of a plurality of points in the depth data, determine an indicator based on (i) whether the point is one of the candidate points, and (ii) a distance between the point and a reference feature in the depth data"; "assigning each of the plurality of points having an indicator that exceeds a threshold to a set of points representing the object"; "dimensioning the object based on the set of points":
These are data-manipulation and processing steps. In isolation, they could be construed as mathematical concepts or mental processes, which are abstract ideas. However, when read in the context of the entire claim, these steps are part of a specific process for achieving a practical, concrete result: dimensioning a real-world object. The focus is not on the mathematical formulas themselves, but on their application within a specific technical process, as the illustrative sketch below reflects.
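For purposes of illustration only, the following minimal sketch shows how an indicator of the claimed kind might be computed from (i) candidate-point membership and (ii) distance to a reference feature, with threshold-based assignment to an object point set. The function name, the weighting of the two factors, and the use of a planar reference feature are the Examiner's illustrative assumptions and are not drawn from Applicant's specification:

    import numpy as np

    def assign_object_points(points, candidate_idx, reference_plane, threshold):
        # points: (N, 3) depth-data points; reference_plane: ax + by + cz + d = 0
        a, b, c, d = reference_plane
        normal = np.array([a, b, c])
        # (ii) perpendicular distance from each point to the reference feature
        dist = np.abs(points @ normal + d) / np.linalg.norm(normal)
        # (i) membership term: 1.0 for mask-derived candidate points, else 0.0
        is_candidate = np.zeros(len(points))
        is_candidate[candidate_idx] = 1.0
        # Illustrative indicator; the weights are assumptions, not claim terms
        indicator = 0.6 * is_candidate + 0.4 * np.exp(-dist)
        # Points whose indicator exceeds the threshold represent the object
        return points[indicator > threshold]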
Conclusion for Step 2A, Prong One: Independent Claims 1, 9, and 18 as a whole are directed to a specific technical process for measuring a physical object using a specific data capture and processing technique, rather than a mere abstract idea or conventional computer use. The combination of hardware and specific, technically rooted data transformations moves the claims out of the "abstract idea" category, particularly where, as here, the specification highlights a technical problem and a novel solution.
Conclusion (Step 2A, Prong One): Independent Claims 1, 9, and 18 are not directed to an abstract idea; even assuming arguendo that they were, they would satisfy Prong Two, as discussed below. Accordingly, Claims 1-18 do not recite an abstract idea under Step 2A, Prong One.
Step 2A, Prong Two – Does the Claim Integrate the Exception into a Practical Application?
Even assuming arguendo that Independent Claims 1, 9, and 18 recite an abstract idea, these claims integrate any such abstract idea into a practical application, making them patent-eligible under the 2024 USPTO guidelines with respect to Step 2A, Prong Two of the eligibility inquiry (as explained in MPEP § 2106.04(d)). The steps of Independent Claims 1, 9, and 18, when viewed as a whole, integrate any potential judicial exceptions (such as mathematical algorithms or data-processing techniques) into a practical application, specifically a new and useful method for measuring a physical object using a unique combination of sensor data. Each element is analyzed below, using Independent Claim 1 as an example:
"capturing (i) depth data depicting an object, and (ii) image data depicting the object" / "capture, via a sensor, (i) depth data depicting an object, and (ii) image data depicting the object": This is a physical, data-gathering step using a specific type of technology (depth and image sensors) to interact with the real world. This is not an abstract idea but rather a specific technological means to acquire a particular type of data, contributing strongly to the practical application requirement.
"determining a mask corresponding to the object from the image data": While this step involves data processing, it is tied to the specific, real-world data captured by the sensor. It is part of a process to manipulate sensor data in a specific way to achieve a physical result (dimensioning an object), rather than a free-floating abstract calculation.
"identifying candidate points in the depth data based on the mask": This step is a specific data filtering process that leverages the output of a prior, sensor-based step ("the mask") to process other sensor data ("the depth data"). This specific interaction between two different types of sensor data processing integrates the algorithm into a concrete application of technology.
"for each of a plurality of points in the depth data, determine an indicator based on (i) whether the point is one of the candidate points, and (ii) a distance between the point and a reference feature in the depth data": This describes a specific, technical algorithm for processing depth data. However, it is applied to a specific set of physical data (points in depth data) and tied to a physical outcome, not a general, abstract mathematical concept.
"assigning each of the plurality of points having an indicator that exceeds a threshold to a set of points representing the object": This data manipulation step is a means to achieve a specific technological goal: accurately isolating the data points of the physical object.
"dimensioning the object based on the set of points": This is the final step where the processed data is used to achieve a concrete, real-world result: measuring the physical dimensions of an object. This transforms the entire process from an abstract mathematical exercise into a practical, applied technology for measuring physical items, thus satisfying the practical application test under MPEP guidance.
Therefore, for Independent Claims 1, 9, and 18, the combination of these steps recites additional elements that integrate the judicial exception into a practical application by providing (1) improvements to the functioning of a computer, or to any other technology or technical field (see MPEP § 2106.05(a)), or (2) applying or using the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claims as a whole are more than a drafting effort designed to monopolize the judicial exception (see MPEP § 2106.05(e)).
Conclusion (Step 2A, Prong Two): Independent Claims 1, 9, and 18 as a whole are patent-eligible because they are not merely an abstract idea implemented on a generic computer. Instead, each is a specific, integrated series of steps that uses particular sensors and data-processing techniques to solve a concrete technical problem: the accurate, automated dimensioning of a physical object. The specific nature of the data capture (depth and image data), the processing steps (masking and indicator-based assignment), and the concrete end result (dimensioning the object) demonstrate that any judicial exceptions have been integrated into a practical application. Thus, Claims 1-18 are patent-eligible under Step 2A, Prong Two of the 35 U.S.C. § 101 analysis.
Claim Rejections - 35 USC § 103
6. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over US PG Pub. US 2020/0109939 A1 to Phan et al., hereinafter Phan, in view of US PG Pub. US 2023/0286165 A1 to Gaisser, hereinafter Gaisser, and in further view of US Patent No. US 11,669,988 B1 to Miller et al., hereinafter Miller.
Regarding Independent Claim 1, Phan's method for image-assisted region growing for object segmentation and dimensioning teaches the following:
- capturing (i) depth data depicting an object (see at least Phan: ¶ [0029] & ¶ [0060] & Fig. 7B. Phan teaches that the mobile automation apparatus 103 is configured to navigate the environment and to capture data. The processor 120 can be further configured to obtain the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). See also Phan at ¶ [0060]: The depth measurements may be maintained in a list 704 in association with the image coordinates. Additional example points 706, 708, 709, 712 and 714 are also illustrated. As shown in the list 704 of depth measurements, the point 708 is located on the surface of a product, while the point 709 is behind the product, e.g. on the shelf back 116 (at a depth of 528 mm, compared to a depth of 235 mm for the point 708).), and (ii) image data depicting the object (see at least Phan: ¶ [0042] & Fig. 4B & Figs. 7A-7B. Phan notes that the points 616a and 616b correspond to image coordinates defined according to an image frame of reference 702 (which in the present example is parallel with the XZ plane of the frame of reference 102). As seen in FIG. 7B, the depth measurements in the frame of reference 102 associated with the points 616 are also retained through the performance of the method 700, although they are not directly represented in the image coordinates (which are two-dimensional). See also Phan at ¶ [0042]: “The server 101 can be configured to process the point cloud, the raw lidar data, image data captured by the cameras 207, or a combination thereof, to identify shelf edges 118 according to predefined characteristics of the shelf edges 118. Examples of such characteristics include that the shelf edges 118 are likely to be substantially planar, and are also likely to be closer to the apparatus 103 (e.g. as the apparatus 103 travels the length 119 of a shelf module 110) than other objects (such as the shelf backs 116 and products 112).”).
- determining a mask corresponding to the object from the image data (see at least Phan: Abstract & ¶ [0046-0048] & Figs. 5A-5B. Phan teaches a mask indicating, for a plurality of portions of an image of the support structure captured from a capture pose, respective confidence levels that the portions depict the back of the support structure. See also Phan at Fig. 5B noting an example back of shelf mask corresponding to the image of Fig. 5A. See also Phan at ¶ [0046-0048]: The mask is also referred to as a back of shelf (BoS) mask or a BoS map. The mask corresponds to the at least one image mentioned above. That is, for each image obtained at block 310, one corresponding mask can also be obtained. The mask is derived from the corresponding image, and indicates, for each of a plurality of portions of the image, a confidence level that the portion depicts the shelf back 116. The portions can be individual pixels, if the mask has the same resolution as the image. In other examples, the mask has a lower resolution than the image, and each confidence level in the mask therefore corresponds to a portion of the image that contains multiple pixels.);
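For context only, a per-portion confidence mask of the general kind Phan describes could be sketched as follows; the depth-agreement heuristic, the block size, and the names are the Examiner's illustrative assumptions and do not reproduce Phan's disclosed algorithm:

    import numpy as np

    def back_of_shelf_mask(depth_map, shelf_back_depth, tolerance=0.05, block=4):
        # depth_map: (H, W) per-pixel depth; one mask cell per block x block portion
        h, w = depth_map.shape
        hb, wb = h // block, w // block
        portions = depth_map[:hb * block, :wb * block].reshape(hb, block, wb, block)
        mean_depth = portions.mean(axis=(1, 3))  # average depth of each portion
        # Confidence decays as a portion's depth deviates from the shelf back
        return np.exp(-np.abs(mean_depth - shelf_back_depth) / tolerance)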
Moreover, Phan's method for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, but Gaisser, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- identifying candidate points in the depth data based on the mask (see at least Gaisser: ¶ [0100-0102] & ¶ [0105-0106] & ¶ [0130-0131]. Gaisser notes for each cell 6101, a plane 6201 may be determined according to the x, y, and z coordinates of the points in the 3D image information 5700 that are encompassed by the cell 6101. Thus, for a kernel size of 20×20, 400 points of the 3D image information 5700 may be used to determine the plane 6201. See also Gaisser at ¶ [0102]: “The height difference may be determined, for example, as the average height difference between corresponding points on the first extended plane 6201BA and the second plane 6201A, wherein the corresponding points correspond grid points in the point cloud of the 3D image information 5700”. See also Gaisser at ¶ [0106]: The height gradient cost map 6200 may include a series of values representing a height gradient of points (in some embodiments, all points) in the 3D point cloud with respect to neighboring points in the 3D point cloud. The points in the height gradient cost map 6200 may be those points in the 3D point cloud image information 5700 that are separated by a stride. See also Gaisser at ¶ [0130-0131]: The actual points 8023 on the surface of the object 8022 do not all fall within the bounding box 8021, due to the deformable nature of the object 8022. Accordingly, in the operation 4008, detection mask information may be generated to identify portions of an object within a bounding box that are more or less suitable for object picking. See also Gaisser at Fig. 8B: “FIG. 8B illustrates detection mask information 8300. The detection mask information 8300 may include information about the objects within the bounding box 8021 (e.g., the bounding box for an image segment 7301 generated during operation 7010). The detection mask information 8300 includes identified areas 8024 and 8027 and unidentified area 8026”.)
- for each of a plurality of points in the depth data (see at least Gaisser: ¶ [0041-0043] & ¶ [0061] & ¶ [0065-0066] & Fig. 6B. Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a point cloud (3D point cloud) which includes respective coordinates for various locations on structures of objects in the camera field of view (e.g., 3210). The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See at least Gaisser at ¶ [0103].), determining an indicator based on (i) whether the point is one of the candidate points (see at least Gaisser: ¶ [0041] & ¶ [0061] & ¶ [0065-0066] & ¶ [0125]. Gaisser notes that image analysis is performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See also Gaisser at ¶ [0061]: References herein related to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. See also Gaisser at ¶ [0065-0066]: The 3D image information 2700 may include, e.g., a depth map or a point cloud that indicates respective depth values of various locations on one or more surfaces (e.g., top surface or other outer surface) of the objects 3520. The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a first image portion 2710, also referred to as an image portion, that indicates respective depth values for a set of locations 2710 1-2710 n, which are also referred to as physical locations on a surface of an object 3520. Further, the 3D image information 2700 may further include a second, a third, a fourth, and a fifth portion 2720, 2730, 2740, and 2750. These portions may then further indicate respective depth values for a set of locations, which may be represented by 2720 1-2720 n, 2730 1-2730 n, 2740 1-2740 n, and 2750 1-2750 n respectively. See also Gaisser at ¶ [0125]: Referring now to FIGS. 7C and 7D, the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed.), and (ii) a distance between the point and a reference feature in the depth data (see at least Gaisser: ¶ [0066] & ¶ [0078] & ¶ [0082] & ¶ [0108]. 
Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. See also Gaisser at ¶ [0078]: The 3D image information may include a depth map, or more generally include depth information, which may describe respective depth values of various locations in the camera field of view 3210 relative to the camera 1200 or relative to some other reference point. See also Gaisser at ¶ [0082]: The camera 1200/1200A/1200B may be stationary relative to a reference point, such as a floor on which the container 3510 is placed or relative to the robot base 3310. For example, the camera 1200 in FIG. 3A may be mounted to a ceiling, such as a ceiling of a warehouse, or to a mounting frame which remains stationary relative to the floor, relative to the robot base 3310, or some other reference point. See also Gaisser at ¶ [0088]: The depth value may be relative to the camera (e.g., 1200/1200A) which generated the 3D image information, or may be relative to some other reference point. See also Gaisser at ¶ [0108]. See also Gaisser at [0115-0116]: “A distance threshold may be selected according to an object size. Any detected height difference that is equal to or larger than the distance threshold may be set to the maximum value for height difference.”);
- assigning each of the plurality of points (see at least Gaisser: ¶ [0093] & ¶ [0099] & ¶ [0103] & ¶ [0106]. Gaisser notes that the surface cost map may assign a surface cost map value to each point of a point cloud representative of the plurality of objects 3520 or a portion thereof. The surface cost map value assigned to any point or kernel may be representative of differences between that point or kernel and neighboring points or kernels. See also Gaisser at ¶ [0099]: Surface cost map values are assigned to the cell centers 6102 and, when performing calculations, each cell 6101 is compared to its non-overlapping neighboring cells 6101. See also Gaisser at ¶ [0103] & ¶ [0106].) having an indicator that exceeds a threshold to a set of points representing the object (see at least Gaisser: Fig. 2F & Fig. 7B & ¶ [0108-0110] & ¶ [0121]. Gaisser notes that the threshold borders 7102 represent regions having a surface cost map value exceeding the threshold while the object portions 7101 represent regions having a surface cost map value not exceeding the threshold. The threshold borders 7102 may thus be represented by “false” values in the threshold mask 7100 while the object portions 7101 are represented as “true” values. The assignment of “false” and “true” values is by convention only, and any suitable distinction may be applied. See also Gaisser at ¶ [0108]: The distance threshold parameter may be a threshold beyond which any height difference is assigned a maximum value. If the height difference between two planes exceeds the distance threshold, then that height difference may be set as a predetermined value (e.g., the distance threshold). See also Gaisser at Fig. 2F.).
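For context only, the distance-threshold clamping and threshold-mask convention quoted from Gaisser above might be sketched as follows; the grid representation and names are illustrative assumptions, not Gaisser's disclosed code:

    import numpy as np

    def height_gradient_cost_map(heights, distance_threshold):
        # Cost per cell = largest height difference to a 4-neighbor, with any
        # difference at or above distance_threshold set to the threshold
        h, w = heights.shape
        padded = np.pad(heights, 1, mode="edge")
        cost = np.zeros((h, w))
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            cost = np.maximum(cost, np.abs(heights - neighbor))
        return np.minimum(cost, distance_threshold)

    def threshold_mask(cost_map, threshold):
        # True for object portions (cost below threshold); False for borders
        return cost_map < threshold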
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Phan's method for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Gaisser: identifying candidate points in the depth data based on the mask; for each of a plurality of points in the depth data, determining an indicator based on (i) whether the point is one of the candidate points, and (ii) a distance between the point and a reference feature in the depth data; and assigning each of the plurality of points having an indicator that exceeds a threshold to a set of points representing the object. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. These technical improvements may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic systems and computational systems described therein address the technical problem of identifying, detecting pickable regions of, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
Moreover, the Phan/Gaisser method for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, but Miller, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- dimensioning the object based on the set of points (see at least Miller: Figs. 9A-9B & Fig. 10B & Col. 6, Lns. 50-54. Miller teaches that the system 100 may determine the dimensions of the target object (106, FIG. 1) (e.g., length 110, width 112, depth 114; FIG. 1) by capturing and analyzing 3D imaging data of the target object via the 3D image sensors (204, FIG. 2) of the mobile device 102. For example, the 3D image sensors 204 may, for each frame captured, generate a point cloud including the target object 106 and its immediate environment (e.g., including the flat surface (108, FIG. 1) on which the target object is disposed and a wall 302 (or other background) in front of which the target object is disposed). The point cloud 300 may be generated based on a depth map (not depicted) captured by the 3D image sensors 204. See also Miller at Fig. 10B step 1016 -> “Measuring the at least three edge segments from the origin point along the at least three edges to determine a second subset of points, each point of the second subset having a depth value indicative of the target object”. Then Miller at Fig. 10B step 1020 -> “Determining, via the mobile computing device, at least one dimension corresponding to an edge of the target object based on the one or more edge distances”. See also Miller at Col. 2, Lns. 24-28: Miller teaches a method for dimensioning an object is disclosed. The method includes obtaining a point cloud of a target object, the point cloud including a plurality of points. The method further includes determining an origin point of the target object from within the plurality of points.).
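For context only, the origin-point and edge-measurement flow of Miller's Fig. 10B might be sketched as follows; the corner-selection heuristic and the axis-aligned edge assumption are the Examiner's illustrative assumptions, not Miller's disclosed implementation:

    import numpy as np

    def dimension_from_origin(points, edge_tol=0.01):
        # points: (N, 3) point cloud of the target object (box-like object assumed)
        # Origin: the captured point nearest the cloud's minimum corner
        min_corner = points.min(axis=0)
        origin = points[np.argmin(np.linalg.norm(points - min_corner, axis=1))]
        offsets = points - origin
        dims = []
        for axis in range(3):
            # Keep points lying along the edge in this axis (other offsets ~ 0)
            others = np.delete(offsets, axis, axis=1)
            on_edge = np.all(np.abs(others) < edge_tol, axis=1)
            dims.append(float(offsets[on_edge, axis].max()) if on_edge.any() else 0.0)
        return tuple(dims)  # one edge distance per measured edge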
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Phan/Gaisser method for image-assisted region growing for object segmentation and dimensioning with the aforementioned teaching of Miller: dimensioning the object based on the set of points. In the Miller system, devices may be equipped with three-dimensional imaging systems incorporating cameras configured to detect infrared radiation, combined with infrared or laser illuminators (e.g., light detection and ranging (LIDAR) systems), to enable the camera to derive depth information. It may be desirable for a mobile device to capture three-dimensional (3D) images of objects, or two-dimensional (2D) images with depth information, and to derive from the captured imagery additional information about the objects portrayed, such as the dimensions of the objects or other details otherwise accessible through visual comprehension, such as significant markings, encoded information, or visible damage (see at least Miller: Col. 1, Lns. 50-64).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Miller, the results of the combination were predictable.
Regarding Independent Claim 9, Phan's computing device for image-assisted region growing for object segmentation and dimensioning teaches the following:
- a sensor (see at least Phan: Fig. 1 & ¶ [0028]. Phan teaches the sensors 104 shown in Fig. 1.);
- a processor configured to (see at least Phan: Fig. 1 & ¶ [0029] & ¶ [0070-0071]. Phan teaches a processor 120 shown in Fig. 1.);
- capture, via the sensor (see at least Phan: Fig. 1 & ¶ [0028].), (i) depth data depicting an object (see at least Phan: ¶ [0029] & ¶ [0060] & Fig. 7B. Phan teaches that the mobile automation apparatus 103 is configured to navigate the environment and to capture data. The processor 120 can be further configured to obtain the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). See also Phan at ¶ [0060]: The depth measurements may be maintained in a list 704, in association with the image coordinates. Additional example points 706, 708, 709, 712 and 714 are also illustrated. As shown in the list 704 of depth measurements, the point 708 is located on the surface of a product, while the point 709 is behind the product, e.g. on the shelf back 116 (at a depth of 528 mm, compared to a depth of 235 mm for the point 708).), and (ii) image data depicting the object (see at least Phan: ¶ [0042] & Fig. 4B & Figs. 7A-7B. Phan notes that the points 616a and 616b correspond to image coordinates defined according to an image frame of reference 702 (which in the present example is parallel with the XZ plane of the frame of reference 102). As seen in FIG. 7B, the depth measurements in the frame of reference 102 associated with the points 616 are also retained through the performance of the method 700, although they are not directly represented in the image coordinates (which are two-dimensional). See also Phan at ¶ [0042]: “The server 101 can be configured to process the point cloud, the raw lidar data, image data captured by the cameras 207, or a combination thereof, to identify shelf edges 118 according to predefined characteristics of the shelf edges 118. Examples of such characteristics include that the shelf edges 118 are likely to be substantially planar, and are also likely to be closer to the apparatus 103 (e.g. as the apparatus 103 travels the length 119 of a shelf module 110) than other objects (such as the shelf backs 116 and products 112).”).
- determine a mask corresponding to the object from the image data (see at least Phan: Abstract & ¶ [0046-0048] & Figs. 5A-5B. Phan teaches a mask indicating, for a plurality of portions of an image of the support structure captured from a capture pose, respective confidence levels that the portions depict the back of the support structure. See also Phan at Fig. 5B noting an example back of shelf mask corresponding to the image of Fig. 5A. See also Phan at ¶ [0046-0048]: The mask is also referred to as a back of shelf (BoS) mask or a BoS map. The mask corresponds to the at least one image mentioned above. That is, for each image obtained at block 310, one corresponding mask can also be obtained. The mask is derived from the corresponding image, and indicates, for each of a plurality of portions of the image, a confidence level that the portion depicts the shelf back 116. The portions can be individual pixels, if the mask has the same resolution as the image. In other examples, the mask has a lower resolution than the image, and each confidence level in the mask therefore corresponds to a portion of the image that contains multiple pixels.).
Moreover, Phan's computing device for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, but Gaisser, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- identify candidate points in the depth data based on the mask (see at least Gaisser: ¶ [0100-0102] & ¶ [0105-0106] & ¶ [0130-0131]. Gaisser notes for each cell 6101, a plane 6201 may be determined according to the x, y, and z coordinates of the points in the 3D image information 5700 that are encompassed by the cell 6101. Thus, for a kernel size of 20×20, 400 points of the 3D image information 5700 may be used to determine the plane 6201. See also Gaisser at ¶ [0102]: “The height difference may be determined, for example, as the average height difference between corresponding points on the first extended plane 6201BA and the second plane 6201A, wherein the corresponding points correspond grid points in the point cloud of the 3D image information 5700”. See also Gaisser at ¶ [0106]: The height gradient cost map 6200 may include a series of values representing a height gradient of points (in some embodiments, all points) in the 3D point cloud with respect to neighboring points in the 3D point cloud. The points in the height gradient cost map 6200 may be those points in the 3D point cloud image information 5700 that are separated by a stride. See also Gaisser at ¶ [0130-0131]: The actual points 8023 on the surface of the object 8022 do not all fall within the bounding box 8021, due to the deformable nature of the object 8022. Accordingly, in the operation 4008, detection mask information may be generated to identify portions of an object within a bounding box that are more or less suitable for object picking. See also Gaisser at Fig. 8B: “FIG. 8B illustrates detection mask information 8300. The detection mask information 8300 may include information about the objects within the bounding box 8021 (e.g., the bounding box for an image segment 7301 generated during operation 7010). The detection mask information 8300 includes identified areas 8024 and 8027 and unidentified area 8026”.)
- for each of a plurality of points in the depth data (see at least Gaisser: ¶ [0041-0043] & ¶ [0061] & ¶ [0065-0066] & Fig. 6B. Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a point cloud (3D point cloud) which includes respective coordinates for various locations on structures of objects in the camera field of view (e.g., 3210). The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See at least Gaisser at ¶ [0103].), determining an indicator based on (i) whether the point is one of the candidate points (see at least Gaisser: ¶ [0041] & ¶ [0061] & ¶ [0065-0066] & ¶ [0125]. Gaisser notes that image analysis is performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See also Gaisser at ¶ [0061]: References herein related to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. See also Gaisser at ¶ [0065-0066]: The 3D image information 2700 may include, e.g., a depth map or a point cloud that indicates respective depth values of various locations on one or more surfaces (e.g., top surface or other outer surface) of the objects 3520. The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a first image portion 2710, also referred to as an image portion, that indicates respective depth values for a set of locations 2710 1-2710 n, which are also referred to as physical locations on a surface of an object 3520. Further, the 3D image information 2700 may further include a second, a third, a fourth, and a fifth portion 2720, 2730, 2740, and 2750. These portions may then further indicate respective depth values for a set of locations, which may be represented by 2720 1-2720 n, 2730 1-2730 n, 2740 1-2740 n, and 2750 1-2750 n respectively. See also Gaisser at ¶ [0125]: Referring now to FIGS. 7C and 7D, the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed.), and (ii) a distance between the point and a reference feature in the depth data (see at least Gaisser: ¶ [0066] & ¶ [0078] & ¶ [0082] & ¶ [0108]. 
Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. See also Gaisser at ¶ [0078]: The 3D image information may include a depth map, or more generally include depth information, which may describe respective depth values of various locations in the camera field of view 3210 relative to the camera 1200 or relative to some other reference point. See also Gaisser at ¶ [0082]: The camera 1200/1200A/1200B may be stationary relative to a reference point, such as a floor on which the container 3510 is placed or relative to the robot base 3310. For example, the camera 1200 in FIG. 3A may be mounted to a ceiling, such as a ceiling of a warehouse, or to a mounting frame which remains stationary relative to the floor, relative to the robot base 3310, or some other reference point. See also Gaisser at ¶ [0088]: The depth value may be relative to the camera (e.g., 1200/1200A) which generated the 3D image information, or may be relative to some other reference point. See also Gaisser at ¶ [0108]. See also Gaisser at [0115-0116]: “A distance threshold may be selected according to an object size. Any detected height difference that is equal to or larger than the distance threshold may be set to the maximum value for height difference.”);
- assign each of the plurality of points (see at least Gaisser: ¶ [0093] & ¶ [0099] & ¶ [0103] & ¶ [0106]. Gaisser notes that the surface cost map may assign a surface cost map value to each point of a point cloud representative of the plurality of objects 3520 or a portion thereof. The surface cost map value assigned to any point or kernel may be representative of differences between that point or kernel and neighboring points or kernels. See also Gaisser at ¶ [0099]: Surface cost map values are assigned to the cell centers 6102 and, when performing calculations, each cell 6101 is compared to its non-overlapping neighboring cells 6101. See also Gaisser at ¶ [0103] & ¶ [0106].) having an indicator that exceeds a threshold to a set of points representing the object (see at least Gaisser: Fig. 2F & Fig. 7B & ¶ [0108-0110] & ¶ [0121]. Gaisser notes that the threshold borders 7102 represent regions having a surface cost map value exceeding the threshold while the object portions 7101 represent regions having a surface cost map value not exceeding the threshold. The threshold borders 7102 may thus be represented by “false” values in the threshold mask 7100 while the object portions 7101 are represented as “true” values. The assignment of “false” and “true” values is by convention only, and any suitable distinction may be applied. See also Gaisser at ¶ [0108]: The distance threshold parameter may be a threshold beyond which any height difference is assigned a maximum value. If the height difference between two planes exceeds the distance threshold, then that height difference may be set as a predetermined value (e.g., the distance threshold). See also Gaisser at Fig. 2F.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Phan's computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Gaisser: identify candidate points in the depth data based on the mask; for each of a plurality of points in the depth data, determine an indicator based on (i) whether the point is one of the candidate points, and (ii) a distance between the point and a reference feature in the depth data; and assign each of the plurality of points having an indicator that exceeds a threshold to a set of points representing the object. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. These technical improvements may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic systems and computational systems described therein address the technical problem of identifying, detecting pickable regions of, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
Moreover, the Phan/Gaisser computing device for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, but Miller, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- dimension the object based on the set of points (see at least Miller: Figs. 9A-9B & Fig. 10B & Col. 6, Lns. 50-54. Miller teaches that the system 100 may determine the dimensions of the target object (106, FIG. 1) (e.g., length 110, width 112, depth 114; FIG. 1) by capturing and analyzing 3D imaging data of the target object via the 3D image sensors (204, FIG. 2) of the mobile device 102. For example, the 3D image sensors 204 may, for each frame captured, generate a point cloud including the target object 106 and its immediate environment (e.g., including the flat surface (108, FIG. 1) on which the target object is disposed and a wall 302 (or other background) in front of which the target object is disposed). The point cloud 300 may be generated based on a depth map (not depicted) captured by the 3D image sensors 204. See also Miller at Fig. 10B step 1016 -> “Measuring the at least three edge segments from the origin point along the at least three edges to determine a second subset of points, each point of the second subset having a depth value indicative of the target object”. Then Miller at Fig. 10B step 1020 -> “Determining, via the mobile computing device, at least one dimension corresponding to an edge of the target object based on the one or more edge distances”. See also Miller at Col. 2, Lns. 24-28: Miller teaches a method for dimensioning an object is disclosed. The method includes obtaining a point cloud of a target object, the point cloud including a plurality of points. The method further includes determining an origin point of the target object from within the plurality of points.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of the Phan/Gaisser computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teaching of Miller: dimension the object based on the set of points. In the Miller system, devices may be equipped with three-dimensional imaging systems incorporating cameras configured to detect infrared radiation, combined with infrared or laser illuminators (e.g., light detection and ranging (LIDAR) systems), to enable the camera to derive depth information. It may be desirable for a mobile device to capture three-dimensional (3D) images of objects, or two-dimensional (2D) images with depth information, and to derive from the captured imagery additional information about the objects portrayed, such as the dimensions of the objects or other details otherwise accessible through visual comprehension, such as significant markings, encoded information, or visible damage (see at least Miller: Col. 1, Lns. 50-64).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Miller, the results of the combination were predictable.
Regarding Independent Claim 18, Phan's non-transitory computer-readable medium for image-assisted region growing for object segmentation and dimensioning teaches the following:
- storing instructions executable by a processor (see at least Phan: Fig. 1 & ¶ [0029-0030] & ¶ [0070-0071]. Phan teaches a processor 120 shown in Fig. 1.) of a computing device (see at least Phan: ¶ [0023-0024] & Fig. 1. Phan teaches the client computing device 105 is illustrated in FIG. 1 as a mobile computing device, such as a tablet, smart phone or the like.) to:
- capture, via the sensor (see at least Phan: Fig. 1 & ¶ [0028].), (i) depth data depicting an object (see at least Phan: ¶ [0029] & ¶ [0060] & Fig. 7B. Phan teaches that the mobile automation apparatus 103 is configured to navigate the environment and to capture data. The processor 120 can be further configured to obtain the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). See also Phan at ¶ [0060]: The depth measurements may be maintained in a list 704, in association with the image coordinates. Additional example points 706, 708, 709, 712 and 714 are also illustrated. As shown in the list 704 of depth measurements, the point 708 is located on the surface of a product, while the point 709 is behind the product, e.g. on the shelf back 116 (at a depth of 528 mm, compared to a depth of 235 mm for the point 708).), and (ii) image data depicting the object (see at least Phan: ¶ [0042] & Fig. 4B & Figs. 7A-7B. Phan notes that the points 616a and 616b correspond to image coordinates defined according to an image frame of reference 702 (which in the present example is parallel with the XZ plane of the frame of reference 102). As seen in FIG. 7B, the depth measurements in the frame of reference 102 associated with the points 616 are also retained through the performance of the method 700, although they are not directly represented in the image coordinates (which are two-dimensional). See also Phan at ¶ [0042]: “The server 101 can be configured to process the point cloud, the raw lidar data, image data captured by the cameras 207, or a combination thereof, to identify shelf edges 118 according to predefined characteristics of the shelf edges 118. Examples of such characteristics include that the shelf edges 118 are likely to be substantially planar, and are also likely to be closer to the apparatus 103 (e.g. as the apparatus 103 travels the length 119 of a shelf module 110) than other objects (such as the shelf backs 116 and products 112).”).
- determine a mask corresponding to the object from the image data (see at least Phan: Abstract & ¶ [0046-0048] & Figs. 5A-5B. Phan teaches a mask indicating, for a plurality of portions of an image of the support structure captured from a capture pose, respective confidence levels that the portions depict the back of the support structure. See also Phan at Fig. 5B noting an example back of shelf mask corresponding to the image of Fig. 5A. See also Phan at ¶ [0046-0048]: The mask is also referred to as a back of shelf (BoS) mask or a BoS map. The mask corresponds to the at least one image mentioned above. That is, for each image obtained at block 310, one corresponding mask can also be obtained. The mask is derived from the corresponding image, and indicates, for each of a plurality of portions of the image, a confidence level that the portion depicts the shelf back 116. The portions can be individual pixels, if the mask has the same resolution as the image. In other examples, the mask has a lower resolution than the image, and each confidence level in the mask therefore corresponds to a portion of the image that contains multiple pixels.).
Moreover, Phan's non-transitory computer-readable medium for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, but Gaisser, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- identify candidate points in the depth data based on the mask (see at least Gaisser: ¶ [0100-0102] & ¶ [0105-0106] & ¶ [0130-0131]. Gaisser notes for each cell 6101, a plane 6201 may be determined according to the x, y, and z coordinates of the points in the 3D image information 5700 that are encompassed by the cell 6101. Thus, for a kernel size of 20×20, 400 points of the 3D image information 5700 may be used to determine the plane 6201. See also Gaisser at ¶ [0102]: “The height difference may be determined, for example, as the average height difference between corresponding points on the first extended plane 6201BA and the second plane 6201A, wherein the corresponding points correspond grid points in the point cloud of the 3D image information 5700”. See also Gaisser at ¶ [0106]: The height gradient cost map 6200 may include a series of values representing a height gradient of points (in some embodiments, all points) in the 3D point cloud with respect to neighboring points in the 3D point cloud. The points in the height gradient cost map 6200 may be those points in the 3D point cloud image information 5700 that are separated by a stride. See also Gaisser at ¶ [0130-0131]: The actual points 8023 on the surface of the object 8022 do not all fall within the bounding box 8021, due to the deformable nature of the object 8022. Accordingly, in the operation 4008, detection mask information may be generated to identify portions of an object within a bounding box that are more or less suitable for object picking. See also Gaisser at Fig. 8B: “FIG. 8B illustrates detection mask information 8300. The detection mask information 8300 may include information about the objects within the bounding box 8021 (e.g., the bounding box for an image segment 7301 generated during operation 7010). The detection mask information 8300 includes identified areas 8024 and 8027 and unidentified area 8026”.)
- for each of a plurality of points in the depth data (see at least Gaisser: ¶ [0041-0043] & ¶ [0061] & ¶ [0065-0066] & Fig. 6B. Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a point cloud (3D point cloud) which includes respective coordinates for various locations on structures of objects in the camera field of view (e.g., 3210). The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See at least Gaisser at ¶ [0103].), determining an indicator based on (i) whether the point is one of the candidate points (see at least Gaisser: ¶ [0041] & ¶ [0061] & ¶ [0065-0066] & ¶ [0125]. Gaisser notes that image analysis is performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See also Gaisser at ¶ [0061]: References herein related to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth value of various locations relative a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. See also Gaisser at ¶ [0065-0066]: The 3D image information 2700 may include, e.g., a depth map or a point cloud that indicates respective depth values of various locations on one or more surfaces (e.g., top surface or other outer surface) of the objects 3520. The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a first image portion 2710, also referred to as an image portion, that indicates respective depth values for a set of locations 2710 1-2710 n, which are also referred to as physical locations on a surface of an object 3520. Further, the 3D image information 2700 may further include a second, a third, a fourth, and a fifth portion 2720, 2730, 2740, and 2750. These portions may then further indicate respective depth values for a set of locations, which may be represented by 2720 1-2720 n, 2730 1-2730 n, 2740 1-2740 n, and 2750 1-2750 n respectively. See also Gaisser at ¶ [0125]: Referring now to FIGS. 7C and 7D, the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed.), and (ii) a distance between the point and a reference feature in the depth data (see at least Gaisser: ¶ [0066] & ¶ [0078] & ¶ [0082] & ¶ [0108]. 
Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. See also Gaisser at ¶ [0078]: The 3D image information may include a depth map, or more generally include depth information, which may describe respective depth values of various locations in the camera field of view 3210 relative to the camera 1200 or relative to some other reference point. See also Gaisser at ¶ [0082]: The camera 1200/1200A/1200B may be stationary relative to a reference point, such as a floor on which the container 3510 is placed or relative to the robot base 3310. For example, the camera 1200 in FIG. 3A may be mounted to a ceiling, such as a ceiling of a warehouse, or to a mounting frame which remains stationary relative to the floor, relative to the robot base 3310, or some other reference point. See also Gaisser at ¶ [0088]: The depth value may be relative to the camera (e.g., 1200/1200A) which generated the 3D image information, or may be relative to some other reference point. See also Gaisser at ¶ [0108]. See also Gaisser at [0115-0116]: “A distance threshold may be selected according to an object size. Any detected height difference that is equal to or larger than the distance threshold may be set to the maximum value for height difference.”);
- assign each of the plurality of points (see at least Gaisser: ¶ [0093] & ¶ [0099] & ¶ [0103] & ¶ [0106]. Gaisser notes that the surface cost map may assign a surface cost map value to each point of a point cloud representative of the plurality of objects 3520 or a portion thereof. The surface cost map value assigned to any point or kernel may be representative of differences between that point or kernel and neighboring points or kernels. See also Gaisser at ¶ [0099]: Surface cost map values are assigned to the cell centers 6102 and, when performing calculations, each cell 6101 is compared to its non-overlapping neighboring cells 6101. See also Gaisser at ¶ [0103] & ¶ [0106].) having an indicator that exceeds a threshold to a set of points representing the object (see at least Gaisser: Fig. 2F & Fig. 7B & ¶ [0108-0110] & ¶ [0121]. Gaisser notes that the threshold borders 7102 represent regions having a surface cost map value exceeding the threshold while the object portions 7101 represent regions having a surface cost map value not exceeding the threshold. The threshold borders 7102 may thus be represented by “false” values in the threshold mask 7100 while the object portions 7101 are represented as “true” values. The assignment of “false” and “true” values is by convention only, and any suitable distinction may be applied. See also Gaisser at ¶ [0108]: The distance threshold parameter may be a threshold beyond which any height difference is assigned a maximum value. If the height difference between two planes exceeds the distance threshold, then that height difference may be set as a predetermined value (e.g., the distance threshold). See also Gaisser at Fig. 2F.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of Phan's non-transitory computer-readable medium for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Gaisser, namely: identifying candidate points in the depth data based on the mask; for each of a plurality of points in the depth data, determining an indicator based on (i) whether the point is one of the candidate points, and (ii) a distance between the point and a reference feature in the depth data; and assigning each of the plurality of points having an indicator that exceeds a threshold to a set of points representing the object. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. The technical improvements described therein may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic and computational systems described therein address the technical problem of identifying, detecting pickable regions for, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
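For clarity of the record, the per-point indicator and threshold-assignment logic mapped above may be illustrated by the following minimal Python sketch. The sketch is illustrative only and is not drawn from Phan, Gaisser, or Miller; the function names and parameters (compute_indicators, assign_object_points, candidate_weight, falloff, threshold) are hypothetical placeholders, and the particular indicator formula is merely one plausible reading of the claim language.
```python
import numpy as np

def compute_indicators(points, candidate_mask, reference_point,
                       candidate_weight=0.5, falloff=0.01):
    # Component (i): a fixed contribution when the point is one of the
    # candidate points identified from the image mask.
    component_i = candidate_weight * candidate_mask.astype(float)
    # Component (ii): a contribution that decays with the distance
    # between the point and the reference feature (hypothetical form).
    distances = np.linalg.norm(points - reference_point, axis=1)
    component_ii = (1.0 - candidate_weight) * np.exp(-falloff * distances)
    return component_i + component_ii

def assign_object_points(points, indicators, threshold=0.6):
    # Points whose indicator exceeds the threshold are assigned to the
    # set of points representing the object.
    return points[indicators > threshold]

# Example with synthetic data:
pts = np.random.rand(1000, 3)
mask = pts[:, 2] > 0.5
ref = np.array([0.5, 0.5, 1.0])
obj = assign_object_points(pts, compute_indicators(pts, mask, ref))
```
Under this reading, a point can satisfy the threshold because it was identified as a candidate from the image mask, because it lies near the reference feature, or both.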
Moreover, the Phan / Gaisser non-transitory computer-readable medium for image-assisted region growing for object segmentation and dimensioning does not explicitly disclose the following, which Miller, in the analogous art of image-assisted region growing for object segmentation and dimensioning, does disclose:
- dimension the object based on the set of points (see at least Miller: Figs. 9A-9B & Fig. 10B & Col. 6, Lns. 50-54. Miller teaches that the system 100 may determine the dimensions of the target object (106, FIG. 1) (e.g., length 110, width 112, depth 114; FIG. 1) by capturing and analyzing 3D imaging data of the target object via the 3D image sensors (204, FIG. 2) of the mobile device 102. For example, the 3D image sensors 204 may, for each frame captured, generate a point cloud including the target object 106 and its immediate environment (e.g., including the flat surface (108, FIG. 1) on which the target object is disposed and a wall 302 (or other background) in front of which the target object is disposed). The point cloud 300 may be generated based on a depth map (not depicted) captured by the 3D image sensors 204. See also Miller at Fig. 10B step 1016 -> “Measuring the at least three edge segments from the origin point along the at least three edges to determine a second subset of points, each point of the second subset having a depth value indicative of the target object”. Then Miller at Fig. 10B step 1020 -> “Determining, via the mobile computing device, at least one dimension corresponding to an edge of the target object based on the one or more edge distances”. See also Miller at Col. 2, Lns. 24-28: Miller discloses a method for dimensioning an object. The method includes obtaining a point cloud of a target object, the point cloud including a plurality of points. The method further includes determining an origin point of the target object from within the plurality of points.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser non-transitory computer-readable medium for image-assisted region growing for object segmentation and dimensioning with the aforementioned teaching of Miller, namely dimensioning the object based on the set of points. Mobile devices in the Miller system may be equipped with three-dimensional imaging systems incorporating cameras configured to detect infrared radiation, combined with infrared or laser illuminators (e.g., light detection and ranging (LIDAR) systems), to enable the camera to derive depth information. It may be desirable for a mobile device to capture three-dimensional (3D) images of objects, or two-dimensional (2D) images with depth information, and derive from the captured imagery additional information about the objects portrayed, such as the dimensions of the objects or other details otherwise accessible through visual comprehension, such as significant marking, encoded information, or visible damage (see at least Miller: Col. 1, Lns. 50-64).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Miller, the results of the combination were predictable.
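For completeness of the record, dimensioning an object from a segmented set of points may be sketched as follows. This is a conventional principal-axes approach offered for illustration only; Miller's own method instead measures edge segments from an origin point, and the function name dimension_object is hypothetical.
```python
import numpy as np

def dimension_object(object_points):
    # Align the object's points to their principal axes (via SVD of
    # the centered cloud) and measure the extent along each axis.
    centered = object_points - object_points.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ axes.T
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    # Report the extents in descending order, e.g., length >= width >= depth.
    length, width, depth = np.sort(extents)[::-1]
    return length, width, depth
```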
Regarding Dependent Claims 2 and 11, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Independent Claims 1 and 9 above, and Phan further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- detecting, from the depth data, a surface supporting the object (see at least Phan: ¶ [0029] & ¶ [0060] & Fig. 7B & Fig. 8B. Phan teaches obtaining the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). See also Phan at ¶ [0060]: “As shown in the list 704 of depth measurements, the point 708 is located on the surface of a product, while the point 709 is behind the product, e.g. on the shelf back 116 (at a depth of 528 mm, compared to a depth of 235 mm for the point 708).”).
- wherein the reference feature includes the surface (see at least Phan: ¶ [0035] & ¶ [0042] & ¶ [0060]. Phan teaches that the apparatus 103 is configured to track a location of the apparatus 103 (e.g. a location of the center of the chassis 201) in the common frame of reference 102 previously established in the retail facility, permitting data captured by the mobile automation apparatus 103 to be registered to the common frame of reference. Additionally, Phan teaches determining coordinates of the center of the field of view, as shown at ¶ [0056-0057].).
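The supporting-surface detection and its use as the claimed reference feature may be illustrated by the following minimal sketch (hypothetical Python, not drawn from Phan; a least-squares plane fit stands in for whatever surface-detection technique Phan employs, and the names fit_support_plane and distance_to_plane are placeholders).
```python
import numpy as np

def fit_support_plane(points):
    # Least-squares plane fit over an (N, 3) array of depth points.
    # The plane normal is the right singular vector associated with
    # the smallest singular value of the centered point set.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal

def distance_to_plane(points, centroid, normal):
    # Signed point-to-plane distances; usable as the distance term of
    # the claimed indicator when the surface serves as the reference.
    return (points - centroid) @ normal
```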
Regarding Dependent Claims 3 and 12, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Claims 1-2, 9 and 11 above, and Phan further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein detecting the surface supporting the object includes (see at least Phan: ¶ [0029] & ¶ [0060] & Fig. 7B & Fig. 8B. Phan teaches obtaining the captured data via a communications interface 124 for storage in a repository 132 and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). See also Phan at ¶ [0060]: “As shown in the list 704 of depth measurements, the point 708 is located on the surface of a product, while the point 709 is behind the product, e.g. on the shelf back 116 (at a depth of 528 mm, compared to a depth of 235 mm for the point 708).”).
- selecting a portion of the depth data excluding the candidate points (see at least Phan: Fig. 3 & Figs. 7A-7B & ¶ [0058-0062]. Phan teaches that, taking the points shown in FIG. 7B, the unoccluded subset of depth measurements obtained therefrom is shown in FIG. 8A, in which it is seen that the points 616a and 709 have been discarded from the unoccluded subset 800. At block 325, the server 101 can be configured to perform one or more additional filtering operations to exclude further points from the unoccluded subset. See also Phan at ¶ [0058-0059]: “Select an unoccluded set of depth measurements from the points in the initial set. The initial set of points selected at block 315, although falling within the FOV 602, may nevertheless include points that were not imaged by the camera 207 because they are occluded from view by the camera 207 by other objects”. Turning to FIG. 7A, an example method 700 of selecting the unoccluded subset of depth measurements is illustrated.).
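Consistent with the limitation mapped above, the plane fit can be restricted to the portion of the depth data that excludes the candidate points, so that the object itself does not bias the surface estimate. The brief sketch below is illustrative only and reuses the hypothetical fit_support_plane helper from the preceding sketch.
```python
import numpy as np

def detect_surface_excluding_candidates(points, candidate_mask):
    # Keep only the non-candidate (background) portion of the depth
    # data before fitting the supporting plane.
    background = points[~candidate_mask]
    # fit_support_plane is the hypothetical helper defined in the
    # preceding sketch.
    return fit_support_plane(background)
```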
Regarding Dependent Claims 4 and 13, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Independent Claims 1 and 9 above, and Gaisser further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein determining the indicator includes: determining a first indicator component (see at least Gaisser: Fig. 2F & Figs. 7B-7E & ¶ [0121]. Gaisser teaches that the object portions 7101 represent a first estimation of object surfaces while the threshold borders 7102 represent a first estimation of object boundaries or discontinuities.) based on whether the point is one of the candidate points (see at least Gaisser: ¶ [0041] & ¶ [0061] & ¶ [0065-0066] & ¶ [0125]. Gaisser notes that image analysis is performed according to or using spatial structure information that may include depth information which describes respective depth values of various locations relative to a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. See also Gaisser at ¶ [0061]: References herein related to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth values of various locations relative to a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. See also Gaisser at ¶ [0065-0066]: The 3D image information 2700 may include, e.g., a depth map or a point cloud that indicates respective depth values of various locations on one or more surfaces (e.g., top surface or other outer surface) of the objects 3520. The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. The 3D image information 2700 may include a first image portion 2710, also referred to as an image portion, that indicates respective depth values for a set of locations 2710₁-2710ₙ, which are also referred to as physical locations on a surface of an object 3520. The 3D image information 2700 may further include a second, a third, a fourth, and a fifth portion 2720, 2730, 2740, and 2750. These portions may then further indicate respective depth values for sets of locations, which may be represented by 2720₁-2720ₙ, 2730₁-2730ₙ, 2740₁-2740ₙ, and 2750₁-2750ₙ, respectively. See also Gaisser at ¶ [0125]: Referring now to FIGS. 7C and 7D, the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point of the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed.);
- determining a second indicator component (see at least Gaisser: Fig. 2F & Figs. 7B-7E & ¶ [0121]. Gaisser teaches that the object portions 7101 represent a first estimation of object surfaces while the threshold borders 7102 represent a first estimation of object boundaries or discontinuities. See also Gaisser at ¶ [0148]: Applying a second cost threshold to a remaining portion of the surface cost map to generate a second thresholded mask; eroding the second thresholded mask to generate a second eroded mask; and applying the connected components analysis to the second eroded mask to identify a second image segment.) based on the distance between the point and the reference feature (see at least Gaisser: ¶ [0066] & ¶ [0078] & ¶ [0082] & ¶ [0088] & ¶ [0108] & ¶ [0115-0116]. Gaisser notes that the respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to another reference point. See also Gaisser at ¶ [0078]: The 3D image information may include a depth map, or more generally include depth information, which may describe respective depth values of various locations in the camera field of view 3210 relative to the camera 1200 or relative to some other reference point. See also Gaisser at ¶ [0082]: The camera 1200/1200A/1200B may be stationary relative to a reference point, such as a floor on which the container 3510 is placed or relative to the robot base 3310. For example, the camera 1200 in FIG. 3A may be mounted to a ceiling, such as a ceiling of a warehouse, or to a mounting frame which remains stationary relative to the floor, relative to the robot base 3310, or some other reference point. See also Gaisser at ¶ [0088]: The depth value may be relative to the camera (e.g., 1200/1200A) which generated the 3D image information, or may be relative to some other reference point. See also Gaisser at ¶ [0108]. See also Gaisser at ¶ [0115-0116]: “A distance threshold may be selected according to an object size. Any detected height difference that is equal to or larger than the distance threshold may be set to the maximum value for height difference.”);
- combining the first and second indicator components (see at least Gaisser: ¶ [0094-0096] & ¶ [0111]. Gaisser notes that the surface cost map may include a height gradient map and a normal difference map or may be computed from a combination of a height gradient map and a normal difference map. The surface cost map may be generated from the 3D image information 5700 to include or be provided as a combination of a height gradient map and a normal difference map based on several cost map parameters. Such cost map parameters, explained in greater detail below, may include kernel, stride, distance threshold, normal threshold, and normal weight factor. See also Gaisser at ¶ [0111]: The surface cost map 6400 may be generated as a mathematical combination of the height gradient cost map 6200 and the normal difference cost map 6300. In embodiments, the computer system may combine the height difference values and the normal difference values according to a filtering operation, such as an average filter or a Sobel filter. The values in the height gradient cost map 6200 and the normal difference cost map 6300 may be normalized and combined.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Gaisser, namely: determining a first indicator component based on whether the point is one of the candidate points; determining a second indicator component based on the distance between the point and the reference feature; and combining the first and second indicator components. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. The technical improvements described therein may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic and computational systems described therein address the technical problem of identifying, detecting pickable regions for, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
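The combination of two indicator components, analogous to Gaisser's combination of a height gradient map with a normal difference map under a weight factor, may be sketched as follows. The sketch is hypothetical; the normalization step and the name combine_components are offered for illustration only.
```python
import numpy as np

def combine_components(component_a, component_b, weight=0.5):
    # Normalize each per-point component to [0, 1] so that neither
    # dominates purely by scale, then blend them with a weight factor.
    def normalize(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x, dtype=float)
    return weight * normalize(component_a) + (1.0 - weight) * normalize(component_b)
```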
Regarding Dependent Claims 5 and 14, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Independent Claims 1 and 9 above, and Gaisser further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- further comprising selecting the plurality of points by (see at least Gaisser: ¶ [0113] & ¶ [0116-0118] & ¶ [0125].)
- selecting a seed point (see at least Gaisser: ¶ [0113] & ¶ [0125]) from the candidate points (see at least Gaisser: ¶ [0100-0102] & ¶ [0105-0106] & ¶ [0130-0131]. Gaisser notes that, for each cell 6101, a plane 6201 may be determined according to the x, y, and z coordinates of the points in the 3D image information 5700 that are encompassed by the cell 6101. Thus, for a kernel size of 20×20, 400 points of the 3D image information 5700 may be used to determine the plane 6201. See also Gaisser at ¶ [0102]: “The height difference may be determined, for example, as the average height difference between corresponding points on the first extended plane 6201BA and the second plane 6201A, wherein the corresponding points correspond to grid points in the point cloud of the 3D image information 5700”. See also Gaisser at ¶ [0106]: The height gradient cost map 6200 may include a series of values representing a height gradient of points (in some embodiments, all points) in the 3D point cloud with respect to neighboring points in the 3D point cloud. The points in the height gradient cost map 6200 may be those points in the 3D point cloud image information 5700 that are separated by a stride. See also Gaisser at ¶ [0130-0131]: The actual points 8023 on the surface of the object 8022 do not all fall within the bounding box 8021, due to the deformable nature of the object 8022. Accordingly, in the operation 4008, detection mask information may be generated to identify portions of an object within a bounding box that are more or less suitable for object picking. See also Gaisser at Fig. 8B: “FIG. 8B illustrates detection mask information 8300. The detection mask information 8300 may include information about the objects within the bounding box 8021 (e.g., the bounding box for an image segment 7301 generated during operation 7010). The detection mask information 8300 includes identified areas 8024 and 8027 and unidentified area 8026”.);
- selecting a first point neighboring the seed point (see at least Gaisser: ¶ [0093-0094] & ¶ [0099-0101] & ¶ [0125]. Gaisser notes that the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point of the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed. See also Gaisser at ¶ [0093-0094]: “The surface cost map values are representative of differences between collections of points, referred to herein as kernels or cells, and neighboring kernels. Thus, the surface cost map value assigned to any point or kernel may be representative of differences between that point or kernel and neighboring points or kernels”. See also Gaisser at ¶ [0099]: Surface cost map values are assigned to the cell centers 6102 and, when performing calculations, each cell 6101 is compared to its non-overlapping neighboring cells 6101. See also Gaisser at ¶ [0113].);
- determining the indicator for the first point (see at least Gaisser: ¶ [0125] & ¶ [0132-0137]. Gaisser notes that the seed 7204 may be the point of the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). See also Gaisser at ¶ [0132]: The safety volume may represent a volume which a selected object for picking may occupy. The safety volume is selected to reduce the likelihood that the selected object, once picked, will collide with something else within the object handling environment. This safety volume size thus creates a volume around the pickable region 9201 that may provide a margin of error in the potential dimensions of the object, for example, if the pickable region 9201 is not located at a center of the object 3520 to be picked. The size of the safety volume 9100 may then be modified as follows.);
- when the indicator exceeds the threshold (see at least Gaisser: ¶ [0108-0110] & ¶ [0117].), selecting a second point neighboring the first point (see at least Gaisser: ¶ [0093-0094] & ¶ [0106-0109] & ¶ [0113]. Gaisser notes that the height gradient cost map 6200 may include a series of values representing a height gradient of points (in some embodiments, all points) in the 3D point cloud with respect to neighboring points in the 3D point cloud. The points in the height gradient cost map 6200 may be those points in the 3D point cloud image information 5700 that are separated by a stride. The surface cost map value assigned to any point or kernel may be representative of differences between that point or kernel and neighboring points or kernels.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Gaisser, namely: selecting the plurality of points by selecting a seed point from the candidate points; selecting a first point neighboring the seed point; determining the indicator for the first point; and, when the indicator exceeds the threshold, selecting a second point neighboring the first point. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. The technical improvements described therein may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic and computational systems described therein address the technical problem of identifying, detecting pickable regions for, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
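The claimed seed-and-neighbor selection is a form of region growing, which may be sketched as follows. The sketch is hypothetical Python over a gridded depth image; the indicator_fn callback stands in for the indicator of the preceding sketches, and the center-based seed choice anticipates the variant of dependent claims 6 and 15.
```python
import numpy as np
from collections import deque

def region_grow(candidate_mask, indicator_fn, threshold=0.6):
    h, w = candidate_mask.shape
    # Seed: the candidate point closest to the center of the image
    # data (consistent with dependent claims 6 and 15).
    candidates = np.argwhere(candidate_mask)
    center = np.array([h / 2.0, w / 2.0])
    seed = tuple(candidates[np.argmin(np.linalg.norm(candidates - center, axis=1))])
    grown = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Visit the four neighbors of the current point.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in grown:
                # A neighbor joins the region only when its indicator
                # exceeds the threshold; growth then continues from it.
                if indicator_fn(nr, nc) > threshold:
                    grown.add((nr, nc))
                    queue.append((nr, nc))
    return grown
```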
Regarding Dependent Claims 6 and 15, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Claims 1, 5, 9 and 14 above, and Gaisser further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein the seed point corresponds to a center of the image data (see at least Gaisser: Figs. 7C-7D & ¶ [0125]. Gaisser teaches that the image segment 7301 may be selected as the object region 7201 having a seed 7204 located therein. The seed 7204 may be the point of the surface cost map having the lowest cost (e.g., the smoothest point least likely to represent a boundary or discontinuity). A segment map 7300 (FIG. 7D) containing the image segment 7301 may be generated by removing all object regions 7201 that do not include the seed.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teaching of Gaisser, namely wherein the seed point corresponds to a center of the image data. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. The technical improvements described therein may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic and computational systems described therein address the technical problem of identifying, detecting pickable regions for, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
Regarding Dependent Claims 7 and 16, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Claims 1, 5-6, 9 and 14-15 above, and Gaisser further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein the reference feature includes the center of the image data (see at least Gaisser: Figs. 6C-6D & ¶ [0100] & ¶ [0123]. Gaisser notes that the structuring element may represent, for example, an NxN group of pixels or points with an output pixel/point, which may be located at a center of the structuring element. See also Gaisser at ¶ [0100] & Figs. 6C-6D: The plane 6201 may be determined according to an average of normal vectors at each point within the 3D image information 5700 within each cell 6101. Each plane 6201 includes a centroid 6202 and a normal 6203. The centroid 6202 is located at the geometric center of the plane 6201 and the normal 6203 extends orthogonally to the plane 6201 from the centroid 6202. The height of each plane 6201 may be defined as the height of its centroid 6202. See also Gaisser at ¶ [0109]: The mean of these normal differences may be taken and assigned to the cell 6101 (e.g., the point at the center of the cell 6101) associated with the plane 6201. In this way, a normal differences cost map 6300 may be generated wherein each point within the surface cost map is assigned a normal difference indicative of angular differences between the plane 6201 centered at the point and the neighboring planes 6201.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teaching of Gaisser, namely wherein the reference feature includes the center of the image data. The system of Gaisser provides technical improvements to a robotic system configured for use in object identification, pickable region identification, and object transfer. The technical improvements described therein may increase the speed, precision, and accuracy of these tasks and further facilitate the detection, pickable region identification, and transfer of objects from a source container or repository to a destination. The robotic and computational systems described therein address the technical problem of identifying, detecting pickable regions for, and retrieving objects from a container, where the objects may be irregularly arranged. By addressing this technical problem, the technology of object identification, pickable region detection, and object retrieval is improved (see at least Gaisser: ¶ [0034]).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Gaisser, the results of the combination were predictable.
Regarding Dependent Claims 8 and 17, Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Independent Claims 1 and 9 above, and Miller further teaches the method / computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein dimensioning the object includes (see at least Miller: Figs. 9A-9B & Fig. 10B & Col. 6, Lns. 50-54. Miller teaches that the system 100 may determine the dimensions of the target object (106, FIG. 1) (e.g., length 110, width 112, depth 114; FIG. 1) by capturing and analyzing 3D imaging data of the target object via the 3D image sensors (204, FIG. 2) of the mobile device 102. For example, the 3D image sensors 204 may, for each frame captured, generate a point cloud including the target object 106 and its immediate environment (e.g., including the flat surface (108, FIG. 1) on which the target object is disposed and a wall 302 (or other background) in front of which the target object is disposed). The point cloud 300 may be generated based on a depth map (not depicted) captured by the 3D image sensors 204. See also Miller at Fig. 10B step 1016 -> “Measuring the at least three edge segments from the origin point along the at least three edges to determine a second subset of points, each point of the second subset having a depth value indicative of the target object”. Then Miller at Fig. 10B step 1020 -> “Determining, via the mobile computing device, at least one dimension corresponding to an edge of the target object based on the one or more edge distances”. See also Miller at Col. 2, Lns. 24-28: Miller discloses a method for dimensioning an object. The method includes obtaining a point cloud of a target object, the point cloud including a plurality of points. The method further includes determining an origin point of the target object from within the plurality of points.);
- determining a bounding box encompassing the set of points (see at least Miller: Figs. 5A-5B & Figs. 9A-9C);
- determining dimensions of the bounding box (see at least Miller: Figs. 9A-9B & Fig. 10B & Col. 6, Lns. 50-54. Miller teaches that the system 100 may determine the dimensions of the target object (106, FIG. 1) (e.g., length 110, width 112, depth 114; FIG. 1) by capturing and analyzing 3D imaging data of the target object via the 3D image sensors (204, FIG. 2) of the mobile device 102. For example, the 3D image sensors 204 may, for each frame captured, generate a point cloud including the target object 106 and its immediate environment (e.g., including the flat surface (108, FIG. 1) on which the target object is disposed and a wall 302 (or other background) in front of which the target object is disposed). The point cloud 300 may be generated based on a depth map (not depicted) captured by the 3D image sensors 204. See also Miller at Fig. 10B step 1016 -> “Measuring the at least three edge segments from the origin point along the at least three edges to determine a second subset of points, each point of the second subset having a depth value indicative of the target object”. Then Miller at Fig. 10B step 1020 -> “Determining, via the mobile computing device, at least one dimension corresponding to an edge of the target object based on the one or more edge distances”. See also Miller at Col. 2, Lns. 24-28: Miller discloses a method for dimensioning an object. The method includes obtaining a point cloud of a target object, the point cloud including a plurality of points. The method further includes determining an origin point of the target object from within the plurality of points. See also Miller at Figs. 5A-5B.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined / modified the teachings of the Phan / Gaisser / Miller method / computing device for image-assisted region growing for object segmentation and dimensioning with the aforementioned teachings of Miller, namely wherein dimensioning the object includes determining a bounding box encompassing the set of points and determining dimensions of the bounding box. Mobile devices in the Miller system may be equipped with three-dimensional imaging systems incorporating cameras configured to detect infrared radiation, combined with infrared or laser illuminators (e.g., light detection and ranging (LIDAR) systems), to enable the camera to derive depth information. It may be desirable for a mobile device to capture three-dimensional (3D) images of objects, or two-dimensional (2D) images with depth information, and derive from the captured imagery additional information about the objects portrayed, such as the dimensions of the objects or other details otherwise accessible through visual comprehension, such as significant marking, encoded information, or visible damage (see at least Miller: Col. 1, Lns. 50-64).
Further, the claimed invention is merely a combination of old elements in the similar field of image-assisted region growing for object segmentation and dimensioning; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Miller, the results of the combination were predictable.
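The bounding-box dimensioning of claims 8 and 17 may be sketched as follows. The sketch is hypothetical Python offered for illustration only; an axis-aligned box is shown for simplicity, though an oriented box could equally be fit, and the name bounding_box_dimensions is a placeholder.
```python
import numpy as np

def bounding_box_dimensions(object_points):
    # Axis-aligned bounding box encompassing the set of points
    # representing the object.
    mins = object_points.min(axis=0)
    maxs = object_points.max(axis=0)
    # The box dimensions are the per-axis extents.
    dims = maxs - mins
    return (mins, maxs), dims
```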
Regarding Dependent Claim 10, Phan / Gaisser / Miller computing device for image-assisted region growing for object segmentation and dimensioning teaches the limitations of Independent Claim 9 above, and Phan further teaches the computing device for image-assisted region growing for object segmentation and dimensioning comprising:
- wherein the sensor includes a depth sensor and an image sensor (see at least Phan: ¶ [0028] & Fig. 1. Phan teaches that the apparatus 103 is equipped with a plurality of navigation and data capture sensors 104, such as image sensors (e.g. one or more digital cameras) and depth sensors (e.g. one or more Light Detection and Ranging (LIDAR) sensors, one or more depth cameras employing structured light patterns, such as infrared light, or the like).).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DERICK HOLZMACHER whose telephone number is (571) 270-7853. The examiner can normally be reached on Monday-Friday 9:00 AM – 6:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached on 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-270-8853.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/DERICK J HOLZMACHER/Patent Examiner, Art Unit 3625
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671