DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen (Chinese Patent Application CN 112660686A).
Regarding Claim 1, Chen discloses a method for determining material-cage stacking (Page 1, lines 15-16: “The invention relates to the technical field of logistics automation, in particular to a depth camera-based material cage stacking method and device, electronic equipment, and system.”), comprising: obtaining a material-cage image by photographing a first stacking apparatus of a first material cage and a second stacking apparatus of a second material cage (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;” and Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”); performing first target detection on the material-cage image with a first detection model to recognize the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”); determining first location information of the first stacking apparatus and second location information of the second stacking apparatus, and determining a first stacking result based on the first location information and the second location information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”); performing second target detection on the material-cage image with a second detection model to extract feature information of the material-cage image (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”), and obtaining a second stacking result based on the feature information (Page 4, lines 35-37: “Step S104, extract the feature on the right side of the upper cage from the second depth map, calculate the position p3 of the feature relative to the right depth camera, and extract the feature on the right side of the lower cage from the second depth map , And calculate the position p4 of the feature relative to the right depth camera;”); and determining whether the first material cage is able to be stacked on the second material cage, based on the first stacking result and the second stacking result (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw; In step S106, the forklift AGV is controlled to make the distance Δx, the distance Δy and the angle Δyaw all approach 0, and the stacking of the upper and lower baskets is completed.”), wherein determining whether the first material cage is able to be stacked on the second material cage, based on the first stacking result and the second stacking result, comprises (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;” and Page 4, lines 35-37: “Step S104, extract the feature on the right side of the upper cage from the second depth map, calculate the position p3 of the feature relative to the right depth camera, and extract the feature on the right side of the lower cage from the second depth map , And calculate the position p4 of the feature relative to the right depth camera;”): obtaining stacking data by weighting and summing the first stacking result and the second stacking result with a stacking determining model (Page 4, lines 50-55: “Because the depth camera is very close to the cage, the data quality is good, ensuring high accuracy of feature extraction, and directly detecting the relative deviation between the upper and lower cages, and performing feedback control algorithms to control the forklift AGV. Therefore, the measurement error of the depth camera and the calculation error caused by the external parameter calibration error can be suppressed (because these errors of the upper and lower baskets in the same camera can be offset), and high stacking repeatability can be ensured.”); and determining whether the first material cage is able to be stacked on the second material cage by comparing the stacking data with a stacking threshold (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw; In step S106, the forklift AGV is controlled to make the distance Δx, the distance Δy and the angle Δyaw all approach 0, and the stacking of the upper and lower baskets is completed.”).
Regarding Claim 2, Chen discloses the method of Claim 1, as seen above. Chen further discloses wherein the material-cage image comprises a first material-cage image and a second material-cage image; and obtaining the material-cage image by photographing the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;” and Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”) comprises: obtaining the first material-cage image by photographing the first stacking apparatus in a first direction of the first material cage and the second stacking apparatus in the first direction of the second material cage (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;”); and obtaining the second material-cage image by photographing the first stacking apparatus in a second direction of the first material cage and the second stacking apparatus in the second direction of the second material cage (Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”), wherein the first direction is different from the second direction (Figure 2: Shows the different directions imaged by first depth map 11 and second depth map 12).
Regarding Claim 3, Chen discloses the method of Claim 2, as seen above. Chen further discloses wherein the first stacking apparatus and the second stacking apparatus each comprise foot cups and piers matched with the foot cups (Figure 2: Front vertical beams 6,8 and front vertical beams 7,9); obtaining the first material-cage image by photographing the first stacking apparatus in the first direction of the first material cage and the second stacking apparatus in the first direction of the second material cage comprises: obtaining the first material-cage image by photographing a foot cup in the first direction of the first material cage and a pier in the first direction of the second material cage (Pages 4-5, lines 59-2: “this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage;”); and obtaining the second material-cage image by photographing the first stacking apparatus in the second direction of the first material cage and the second stacking apparatus in the second direction of the second material cage comprises: obtaining the second material-cage image by photographing a foot cup in the second direction of the first material cage and a pier in the second direction of the second material cage (Page 5, lines 2-3: “the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9.”).
Regarding Claim 4, Chen discloses the method of Claim 3, as seen above. Chen further discloses wherein the first direction and the second direction are two sides of a warehousing unmanned forklift (Figure 1: Shows forklift AGV 3 wherein each side of the forklift relates to a side photographed (Beams 6,7 on the left side, beams 8,9 on the right side)).
Regarding Claim 5, Chen discloses the method of Claim 1, as seen above. Chen further discloses wherein performing the first target detection on the material-cage image with the first detection model to recognize the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage comprises (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”): obtaining the feature information of the material-cage image with the first detection model, and recognizing the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage according to the feature information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”).
Regarding Claim 6, Chen discloses the method of Claim 1, as seen above. Chen further discloses wherein the first material cage has a first foot cup on a first surface of the first material cage and a first pier on a second surface of the first material cage (Figure 2: Shows beam 6 on the lower surface of material cage 4, and wherein beam 7 will be located on the upper surface of cage 4 following the structure described wherein Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”), the second material cage has a second foot cup on a first surface of the second material cage and a second pier on a second surface of the second material cage (Figure 2: Wherein beam 6 will be located on the bottom surface of cage 5 following the structure described, and wherein beam 7 is shown on the upper surface of cage 5, wherein Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”), and wherein determining the first location information of the first stacking apparatus and the second location information of the second stacking apparatus, and determining the first stacking result based on the first location information and the second location information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”) comprises: determining the first location information of the first foot cup in the first stacking apparatus and the second location information of the second pier in the second stacking apparatus (Figure 2: Beams 6 and 7); obtaining a distance between the 
first foot cup and the second pier in each of stacking groups based on the first location information and the second location information (Pages 4-5, lines 59-2: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage;”), wherein the first foot cup and the second pier in a same stacking group are aligned with each other in a vertical direction when the first material cage is stacked on the second material cage (Figure 2: Shows how beams 6 and 7 are vertically aligned to one another as they are stacked); for each of the stacking groups, obtaining a comparing result of the stacking group by comparing the distance between the first foot cup and the second pier with a distance threshold (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”); and determining the first stacking result based on the comparing results of the stacking groups (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw”, wherein the first stacking result comes from calculations of positions on the left side p1 and p2).
Regarding Claim 7, Chen discloses the method of Claim 6, as seen above. Chen further discloses wherein the first surface of the first material cage is a bottom surface of the first material cage (Figure 2: Bottom surface of cage 4, where beam 6 is located), and the second surface of the first material cage is a top surface of the first material cage (Figure 2: Top surface of cage 4, where beam 7 would be located following the structure disclosed in Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”); the first surface of the second material cage is a bottom surface of the second material cage (Figure 2: Bottom surface of cage 5, wherein beam 6 would be located following the structure disclosed in Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”), and the second surface of the second material cage is a top surface of the second material cage (Figure 2: Top surface of cage 5, where beam 7 is located); and the bottom surface of the first material cage is close to or in contact with the top surface of the second material cage when the first material cage is stacked on the second material cage (Figure 2: Shows the bottom surface of cage 4 coming into close contact with the top surface of cage 5 when stacked).
Regarding Claim 8, Chen discloses the method of Claim 6, as seen above. Chen further discloses wherein determining the first location information of the first foot cup in the first stacking apparatus and the second location information of the second pier in the second stacking apparatus (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”) comprises: assigning a midpoint of a rectangular frame defined by the first foot cup and the second pier as a positioning point of the first foot cup and the second pier (Page 5, lines 24-27: “At the same time, two of the depth cameras are installed at the same distance from the center of the forklift AGV. The relative position relationship of the two depth cameras is calibrated in advance by the external parameter calibration method. 
Without loss of generality, the pose conversion relationship of the right camera relative to the left camera is defined as T.”, wherein page 5, lines 54-55: “Calculate the coordinate value of the center of the upper cage in the depth camera coordinate system on the left as:” and page 6, lines 1-2: “Calculate the coordinate value of the center of the lower cage in the depth camera coordinate system on the left as:”); and obtaining the first location information and the second location information in real time according to internal and external parameters of an image obtaining device and an equation of ground in a coordinate system of the image obtaining device (Page 7, lines 18-24: “The first acquisition module 21 is used to acquire a first depth map containing the left side of the upper and lower cages through the left depth camera; The first extraction calculation module 22 extracts the feature on the left side of the upper cage from the first depth map, calculates the position p1 of the feature relative to the left depth camera, and extracts the lower cage from the first depth map The feature on the left, and calculate the position p2 of the feature relative to the depth camera on the left;”, wherein the calculation module calculates based upon positions and depths from the camera).
Regarding Claim 9, Chen discloses the method of Claim 6, as seen above. Chen further discloses wherein determining the first stacking result based on the comparing results of the stacking groups comprises (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;” and Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw”, wherein the first stacking result comes from calculations of positions on the left side p1 and p2): determining the first stacking result is that the first material cage is able to be stacked on the second material cage, on condition that the comparing results of the stacking groups each indicate that the first material cage is able to be stacked on the second material cage; or determining the first stacking result to be each of the comparing results of the stacking groups (Pages 6-7, lines 59-4: “Through the above steps, it can be ensured that the left front vertical beam of the upper cage is aligned with the left front vertical beam of the lower cage, and the right front vertical beam of the upper cage is aligned with the right front vertical beam of the lower cage. When the two sets of vertical beams are aligned, it can be ensured that the upper cage is aligned with the lower cage as a whole, that is, the two sets of vertical beams behind the two cages are also automatically aligned. In this way, when the forklift arm is lower limit, the stacking can be ensured successfully.”).
Regarding Claim 11, Chen discloses the method of Claim 1, as seen above. Chen further discloses wherein the stacking determining model is a trained classifier (Page 4, lines 50-55: “Because the depth camera is very close to the cage, the data quality is good, ensuring high accuracy of feature extraction, and directly detecting the relative deviation between the upper and lower cages, and performing feedback control algorithms to control the forklift AGV. Therefore, the measurement error of the depth camera and the calculation error caused by the external parameter calibration error can be suppressed (because these errors of the upper and lower baskets in the same camera can be offset), and high stacking repeatability can be ensured.”, wherein the feedback control algorithm works as a trained classifier).
Regarding Claim 12, Chen discloses the method of Claim 1, as seen above. Chen further discloses further comprising: triggering operations of stacking the first material cage on the second material cage upon determining that the first material cage can be stacked on the second material cage; or triggering operations of preventing the first material cage from being stacked on the second material cage upon determining that the first material cage cannot be stacked on the second material cage (Pages 6-7, lines 59-4: “Through the above steps, it can be ensured that the left front vertical beam of the upper cage is aligned with the left front vertical beam of the lower cage, and the right front vertical beam of the upper cage is aligned with the right front vertical beam of the lower cage. When the two sets of vertical beams are aligned, it can be ensured that the upper cage is aligned with the lower cage as a whole, that is, the two sets of vertical beams behind the two cages are also automatically aligned. In this way, when the forklift arm is lower limit, the stacking can be ensured successfully.”).
Regarding Claim 13, Chen discloses the method of Claim 1, as seen above. Chen further discloses wherein the first location information is coordinate information of the first stacking apparatus (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”), and the second location information is coordinate information of the second stacking apparatus (Page 4, lines 35-37: “Step S104, extract the feature on the right side of the upper cage from the second depth map, calculate the position p3 of the feature relative to the right depth camera, and extract the feature on the right side of the lower cage from the second depth map , And calculate the position p4 of the feature relative to the right depth camera;”, wherein coordinate information is disclosed from Page 5, lines 37-39: “The positions p1, p2, p3, p4, these position values all contain two degrees of freedom, for example, p1 contains p1.x, p1.y, which respectively represent the x and y axis values of the position in the depth camera coordinate system.”).
Regarding Claim 14, Chen discloses a computer device, comprising: a processor; and a memory configured to store computer instructions (Page 7, lines 58-60: “Correspondingly, the present invention also provides a computer-readable storage medium on which computer instructions are stored, characterized in that, when the instructions are executed by a processor, the above-mentioned depth camera-based cage stacking method is implemented.”) which, when executed by the processor, enable the processor to: obtain a material-cage image by photographing a first stacking apparatus of a first material cage and a second stacking apparatus of a second material cage with an image obtaining device (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;” and Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”); perform first target detection on the material-cage image with a first detection model to recognize the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”); determine first location information of the first stacking apparatus and second location information of the second stacking apparatus, and determine a first stacking result based on the first location information and the second location information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”); perform second target detection on the material-cage image with a second detection model to extract feature information of the material-cage image (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”), and obtain a second stacking result based on the feature information (Page 4, lines 35-37: “Step S104, extract the feature on the right side of the upper cage from the second depth map, calculate the position p3 of the feature relative to the right depth camera, and extract the feature on the right side of the lower cage from the second depth map , And calculate the position p4 of the feature relative to the right depth camera;”); obtaining stacking data by weighting and summing the first stacking result and the second stacking result with a stacking determining model (Page 4, lines 50-55: “Because the depth camera is very close to the cage, the data quality is good, ensuring high accuracy of feature extraction, and directly detecting the relative deviation between the upper and lower cages, and performing feedback control algorithms to control the forklift AGV. Therefore, the measurement error of the depth camera and the calculation error caused by the external parameter calibration error can be suppressed (because these errors of the upper and lower baskets in the same camera can be offset), and high stacking repeatability can be ensured.”); and determine whether the first material cage is able to be stacked on the second material cage by comparing the stacking data with a stacking threshold (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw; In step S106, the forklift AGV is controlled to make the distance Δx, the distance Δy and the angle Δyaw all approach 0, and the stacking of the upper and lower baskets is completed.”).
Regarding Claim 15, Chen discloses the computer device of Claim 14, as seen above. Chen further discloses wherein the material-cage image comprises a first material-cage image and a second material-cage image; and the processor configured to obtain the material-cage image is configured to: obtain the first material-cage image by photographing the first stacking apparatus in a first direction of the first material cage and the second stacking apparatus in the first direction of the second material cage with the image obtaining device (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;”); and obtain the second material-cage image by photographing the first stacking apparatus in a second direction of the first material cage and the second stacking apparatus in the second direction of the second material cage with the image obtaining device (Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”), wherein the first direction is different from the second direction (Figure 2: Shows the different directions imaged by first depth map 11 and second depth map 12).
Regarding Claim 16, Chen discloses the computer device of Claim 15, as seen above. Chen further discloses wherein the first stacking apparatus and the second stacking apparatus each comprise foot cups and piers matched with the foot cups (Figure 2: Front vertical beams 6, 8 and front vertical beams 7, 9); the processor configured to obtain the first material-cage image is configured to: obtain the first material-cage image by photographing a foot cup in the first direction of the first material cage and a pier in the first direction of the second material cage with the image obtaining device (Pages 4-5, lines 59-2: “this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage;”); and the processor configured to obtain the second material-cage image is configured to: obtain the second material-cage image by photographing a foot cup in the second direction of the first material cage and a pier in the second direction of the second material cage with the image obtaining device (Page 5, lines 2-3: “the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9.”).
Regarding Claim 17, Chen discloses the computer device of Claim 14, as seen above. Chen further discloses wherein the processor configured to perform the first target detection is configured to: obtain the feature information of the material-cage image with the first detection model (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9. In this way, no additional marks can be used to assist the forklift in determining the pose.”), and recognize the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage according to the feature information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”).
Regarding Claim 18, Chen discloses the computer device of Claim 14, as seen above. Chen further discloses wherein the first material cage has a first foot cup on a first surface of the first material cage and a first pier on a second surface of the first material cage (Figure 2: Shows beam 6 on the lower surface of material cage 4, wherein beam 7 will be located on the upper surface of cage 4 following the structure described at Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”), the second material cage has a second foot cup on a first surface of the second material cage and a second pier on a second surface of the second material cage (Figure 2: Wherein beam 6 will be located on the bottom surface of cage 5 following the structure described, and beam 7 is shown on the upper surface of cage 5; see Page 5, lines 7-9: “Without loss of generality, there are four vertical beams on the four corners of each cage, and their specifications and dimensions are consistent.”), and wherein the processor configured to determine the first location information of the first stacking apparatus and the second location information of the second stacking apparatus, and determine the first stacking result based on the first location information and the second location information (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”) is configured to: determine the first location information of the first foot cup in the first stacking apparatus and the second location information of the second pier in the second stacking apparatus (Figure 2: Beams 6
and 7); obtain a distance between the first foot cup and the second pier in each of stacking groups based on the first location information and the second location information (Pages 4-5, lines 59-2: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage;”), wherein the first foot cup and the second pier in a same stacking group are aligned with each other in a vertical direction when the first material cage is stacked on the second material cage (Figure 2: Shows how beams 6 and 7 are vertically aligned with one another as they are stacked); for each of the stacking groups, obtain a comparing result of the stacking group by comparing the distance between the first foot cup and the second pier with a distance threshold (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”); and determine the first stacking result based on the comparing results of the stacking groups (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw,” wherein the first stacking result comes from the calculations of the left-side positions p1 and p2).
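The per-group distance check recited in Claim 18 can be sketched as follows. The sketch is hypothetical: neither the claim nor Chen supplies code, so the function name, the planar-distance metric, and the all-groups-must-pass rule are assumptions used purely for illustration.

```python
import math

def first_stacking_result(stacking_groups, distance_threshold):
    """Hypothetical sketch: each stacking group pairs the (x, y)
    location of a foot cup on the first cage with the (x, y) location
    of the pier on the second cage it must align with vertically.
    A group's comparing result is True when their planar distance is
    within the threshold; the first stacking result is positive only
    when every group passes."""
    comparing_results = []
    for cup_xy, pier_xy in stacking_groups:
        comparing_results.append(math.dist(cup_xy, pier_xy) <= distance_threshold)
    return all(comparing_results)
```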
Regarding Claim 19, Chen discloses A non-volatile computer-readable storage medium configured to store computer programs (Page 7, lines 58-60: “Correspondingly, the present invention also provides a computer-readable storage medium on which computer instructions are stored, characterized in that, when the instructions are executed by a processor, the above-mentioned depth camera-based cage stacking method is implemented.”) which, when executed by a computer, enable the computer to: obtain a material-cage image by photographing a first stacking apparatus of a first material cage and a second stacking apparatus of a second material cage; (Page 4, lines 25-26: “Step S101, collecting a first depth map containing the left side of the upper and lower cages through the left depth camera;” and Page 4, lines 32-33: “Step S103, acquiring a second depth map on the right side of the upper and lower cages through the right depth camera;”) perform first target detection on the material-cage image with a first detection model to recognize the first stacking apparatus of the first material cage and the second stacking apparatus of the second material cage; (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9.
In this way, no additional marks can be used to assist the forklift in determining the pose.”) determine first location information of the first stacking apparatus and second location information of the second stacking apparatus, and determine a first stacking result based on the first location information and the second location information; (Page 4, lines 28-30: “Step S102: Extract the feature on the left side of the upper cage from the first depth map, calculate the position p1 of the feature relative to the left depth camera, and extract the feature on the left side of the lower cage from the first depth map , And calculate the position p2 of the feature relative to the left depth camera;”) perform second target detection on the material-cage image with a second detection model to extract feature information of the material-cage image (Pages 4-5, lines 59-4: “referring to FIG. 2, this embodiment preferably adopts the feature extracted from the first depth map 11 as the left front vertical beam 6 of the upper cage; and the feature extracted from the first depth map The feature is the left front vertical beam 7 of the lower cage; the feature extracted from the second depth map 12 is the upper right front vertical beam 8 of the cage; the feature extracted from the second depth map is the lower cage Right front vertical beam 9.
In this way, no additional marks can be used to assist the forklift in determining the pose.”) and obtain a second stacking result based on the feature information; (Page 4, lines 35-37: “Step S104, extract the feature on the right side of the upper cage from the second depth map, calculate the position p3 of the feature relative to the right depth camera, and extract the feature on the right side of the lower cage from the second depth map , And calculate the position p4 of the feature relative to the right depth camera;”) obtain stacking data by weighting and summing the first stacking result and the second stacking result with a stacking determining model; (Page 4, lines 50-55: “Because the depth camera is very close to the cage, the data quality is good, ensuring high accuracy of feature extraction, and directly detecting the relative deviation between the upper and lower cages, and performing feedback control algorithms to control the forklift AGV. Therefore, the measurement error of the depth camera and the calculation error caused by the external parameter calibration error can be suppressed (because these errors of the upper and lower baskets in the same camera can be offset), and high stacking repeatability can be ensured.”) and determine whether the first material cage is able to be stacked on the second material cage by comparing the stacking data with a stacking threshold (Page 4, lines 39-44: “Step S105, according to the position p1, position p2, position p3, and position p4, and the pose conversion relationship T between the right camera and the left camera, calculate the distance Δx, the distance Δy, and the deviation of the upper cage relative to the lower cage. Angle Δyaw; In step S106, the forklift AGV is controlled to make the distance Δx, the distance Δy and the angle Δyaw all approach 0, and the stacking of the upper and lower baskets is completed.”).
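The claimed weighting-and-thresholding step (the "stacking determining model" of Claim 19) can be sketched as below. This step belongs to the claims rather than to Chen's quoted disclosure, and the weights, the direction of the comparison, and the treatment of the two stacking results as deviation scores are all assumptions of this illustration.

```python
def can_stack(first_result, second_result, w1=0.5, w2=0.5, threshold=1.0):
    """Hypothetical sketch: treat the first and second stacking results
    as per-side deviation scores, combine them by a weighted sum into
    the stacking data, and allow stacking only when the combined
    deviation stays below the stacking threshold."""
    stacking_data = w1 * first_result + w2 * second_result
    return stacking_data < threshold
```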
Allowable Subject Matter
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Regarding Claim 10, there is no prior art, alone or in combination, that teaches a method for determining material-cage stacking that includes the combination of limitations recited in Claim 10. The art, alone or in combination, does not teach a method comprising obtaining two image samples, one of two cages that can be stacked and one of two cages that cannot be stacked, and using these images to train the second detection model. The closest prior art of record, Chen (Chinese Patent Application CN 112660686A), teaches a method similar to that of Claim 10, but fails to teach the image samples used to train the second detection model. Additionally, no other references, or reasonable combination thereof, could be found which disclose or suggest these features in combination with the other limitations of the claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
United States Patent US 10,562,714 B2 (Hamaguchi, Jun): This patent has been deemed pertinent due to its similarities to Claims 1-20. Hamaguchi teaches a similar detection device for container stacking comprising recording container information, reading data from the code reader, using a calculation processor to detect position coordinate values, and determining stacking abnormalities, as seen in Figure 2A.
German Patent Application DE 102016013497A1 (Kimoto, Yuuki): This patent application has been deemed pertinent due to its similarities to Claims 1-20. Kimoto teaches a similar article stacking process comprising a stowage pattern calculation device that calculates the combination of articles and further includes a position determination part that determines the positions at which the articles are stacked, as seen in Figure 1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABBY ALLURA JORGENSEN whose telephone number is (571)270-7124. The examiner can normally be reached M-F 8-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gene Crawford can be reached at (571) 272-6911. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABBY A JORGENSEN/ Examiner, Art Unit 3651
/GENE O CRAWFORD/ Supervisory Patent Examiner, Art Unit 3651