DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This final Office action is in response to the reply received on 11/26/2025. Claims 1-20 are pending. Claims 1 and 17-20 have been amended.
Response to Arguments
Applicant’s additional arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Information Disclosure Statement
The information disclosure statement received on 12/12/2025 has been annotated and considered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 8, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US 20190329409 A1, hereinafter Yamada(409)) in view of Yamada et al. (US 20180276501 A1, hereinafter Yamada(501)) and Huang et al. (US 20200331144 A1, hereinafter Huang).
Regarding Claim 1, Yamada(409) discloses:
A picking system for picking an object disposed in a first region, (illustrated in at least figure 1)
a holder configured to hold the object; (holding device such as robot hand [¶0025])
processing circuitry configured to control a motion of the holder; and (Information processing apparatus 20 ; robot control apparatus ; controlling the operation of the robot ; "For example, the information processing apparatus 20 is implemented by a personal computer" [¶0030])
a sensor configured to acquire information of the object, (imaging device 2 ; camera ; generates image information ; visual information about the target object 41 [¶0028])
wherein the processing circuitry is configured to: acquire the information from the sensor; (the image information obtaining unit 201 obtains image information ; from the imaging device 2 [¶ 0036] and also see "The information processing apparatus 20 functions as an image information obtaining unit 201, a holding success/failure probability distribution determination unit 202, a holding position determination unit 203, a control unit 204, and a holding success/failure determination unit 205. Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
calculate one or more positions and postures of the holder for picking up the object based on the acquired information; ( "From the equations, the control unit 204 calculates the position and orientation of the manipulator 1 in holding the target object 41…The control unit 204 can thus determine the position and orientation of the manipulator 1 in holding the target object 41 from the image of the target object 41 captured by the imaging device 2." [¶0066])
generate a holding operation plan of holding the object by the holder, based on the acquired information; (holding position determination unit 203 ; determines a holding position or a holding position and orientation of the manipulator ; holding operation to be performed ; trajectory to the holding position [¶ 0040] and also see "The information processing apparatus 20 functions as an image information obtaining unit 201, a holding success/failure probability distribution determination unit 202, a holding position determination unit 203, a control unit 204, and a holding success/failure determination unit 205. Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
control a motion of the holder such that the holder performs a pickup operation and a transport operation on the object, based on the holding operation plan; (control unit 204 ; controls operation ; manipulator 1; holding device 3 [¶ 0041] and also see "The information processing apparatus 20 functions as an image information obtaining unit 201, a holding success/failure probability distribution determination unit 202, a holding position determination unit 203, a control unit 204, and a holding success/failure determination unit 205. Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
detect a failure in a generation of the holding operation plan or a failure in a holding operation on the object by the holder; (information processing apparatus 20 ; success/failure determination ; execution result of the holding operation [¶ 0032] and also see "The information processing apparatus 20 functions as an image information obtaining unit 201, a holding success/failure probability distribution determination unit 202, a holding position determination unit 203, a control unit 204, and a holding success/failure determination unit 205. Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
perform a second retry determination of deciding an operation for a retry in a case in which the processing circuitry detects the failure in the pickup operation on the object by the holder, the second retry determination being different from the first retry determination; (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶0032])
However, Yamada(409) fails to disclose the first retry determination.
Nevertheless, Yamada(501)--who is in the same field of endeavor--discloses: perform a first retry determination of deciding an operation for a retry in a case in which the processing circuitry detects the failure in the generation of the holding operation plan, (recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again. [¶0089]).
Furthermore, Yamada(409) discloses the second retry being possible in at least ¶0032, and Yamada(501) discloses the first retry being possible in at least ¶0089, and thus, the prior art teaches the case when the first or second retry are possible.
However, Yamada(409) and Yamada(501) do not explicitly disclose determining whether or not it is necessary to acquire information of the object when a retry is possible.
Nevertheless, Huang--who is directed towards robotic object picking--discloses: determine whether or not there is a selectable one among the one or more positions and postures that have already been calculated, (See at least ¶0027 via "The machine learning training data (21) includes a plurality of known grasp location labels (23) assigned to the object (36) positioned in a plurality of different object poses. In an example, the known grasp location labels (23) of the object (36) are first assigned (201) and/or stored (202) by, or at the direction of, at least one user (28) of system (1)." along with Figure 6 which illustrates that Steps 201/202 are the first steps, thus showing the grasp locations (position+posture) have already been calculated. Additionally, see ¶0029 via "In the embodiment, the result (33) of the first executing (211) step is first evaluated (213) according to at least one predetermined performance criteria (35)" and ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)…The second mapping (217) and first iterating (221) steps may be performed in method (200) in response to (e.g., as determined by first logical test (269)) the result (33) of the first executing (211) step not meeting the at least one predetermined performance criteria (35)" which illustrates that, after the execution results are evaluated, another grasp location is selected from the already-calculated grasp positions if the criteria are not met--thus illustrating the determination of whether or not there is a selectable one among the calculated positions/postures)
[Image: media_image1.png (greyscale)]
in a case where it is determined that there is a selectable position and posture that have already been calculated, determining that it is not necessary to acquire information of the object (See at least Figure 6 and ¶0031 via "Referring to FIGS. 1-6, in an embodiment, the first mapping (207) step of method (200) may be performed in the absence of estimating (231), by processor(s) (8), the first pose of the object (36) in the work space (48). In this case, such mapping is object (36) location- and object (36) orientation-invariant. Therefore, there is no need to estimate the object (36) pose in the work space (48)…Object pose estimation can be considered as an intermediate step for the robotic system to first localize the object, then find the grasp points based on the relative location on the object. The systems, methods, and software for controlling object picking and placement by robot systems disclosed herein skip this intermediate step. In the disclosed embodiments, the robotic (e.g., system (1)) can find the grasp points on object (36) without knowing the object (36) position and the relative location from the grasp points and the object (36)." which illustrates determining that it is not necessary to acquire information of the object when a selectable position and posture have already been calculated.)
and generating a new holding operation plan based on the position and posture that have already been calculated (See at least Figure 6 which illustrates the second map 217, first iterate 221, and subsequent iterate 273. Also see at least ¶0037 via "In an embodiment, method (200) may include first determining (225), by processor(s) (8), an autonomous movement control scheme (45). In the embodiment, the autonomous movement control scheme (45) may be first determined (225) based on a position of the first candidate grasp location (29) on the object (36) in the work space (48)…, the first determined (225) autonomous movement control scheme (45) may be stored in, and read from, data structure (502) and/or elsewhere in memory (10) by, or at the direction of, processor(s) (8)" as well as ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)" which illustrates the re-determining of the autonomous movement control scheme based on the selection of a different grasp candidate).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the first retry determination of Yamada(501) into the picking system of Yamada(409), in order to improve efficiency by confirming that a holding operation plan can be generated before expending the energy to execute it. Furthermore, it would have been obvious to determine not to acquire more information about an object that already has calculated grasp candidates, as in Huang, in order to improve the efficiency of the system by not performing unnecessary work: "Therefore, there is no need to estimate the object (36) pose in the work space (48), which provides significant improvements in efficiency as compared to known systems, methods, and software for controlling object picking and placement by robot systems." [Huang ¶0031].
Regarding Claim 2, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 1.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to, in the second retry determination, determine whether or not a position of the object within the first region has changed between before and after the pickup operation (information processing apparatus 20 ; history of holding positions ; where target object 41 has failed to be held ; prevent holding positions from being determined as a new holding position ; change in the state of the target object 41 ; successful holding, acquisition of visual information ; change in position due to failed holding [¶ 0082]).
Regarding Claim 8, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 1.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to: calculate one or more positions and postures of the holder for picking up the object based on the information; (equations ; control unit 204 calculates the position and orientation of the manipulator 1 ; holding the target object 41. [¶ 0066] and also see: ¶0035 for processing circuitry)
and calculate a route in which the holder travels, based on the selected position and posture (control unit 204 determines how to move ; based on the holding position ; holding position and orientation ; or the trajectory [¶ 0060] and also see: ¶0035 for processing circuitry).
However, Yamada(409) does not explicitly disclose the following, which Yamada(501) discloses: select a position and posture for generating the holding operation plan from the one or more positions and postures (possible to generate the position and orientation ; determined that the target object 1 can be held ; flow shifts to S237 [¶ 0089]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the selecting of the position and posture of Yamada(501) into the position and posture calculation and the route calculation of Yamada(409), in order to make the picking system disclosed by Yamada(409), Yamada(501), and Huang more efficient by planning different aspects of the operation before actually executing it, which conserves energy and is necessary for efficient and accurate picks.
Regarding Claim 17, Yamada(409) discloses:
A picking device for picking an object, (illustrated in at least figure 1)
comprising: a holder configured to hold the object; and (holding device 3, such as a robot hand [¶ 0025])
processing circuitry configured to control a motion of the holder, (Information processing apparatus 20 ; robot control apparatus ; controlling the operation of the robot ; "For example, the information processing apparatus 20 is implemented by a personal computer" [¶ 0030])
wherein the processing circuitry is configured to: acquire information of the object; (the image information obtaining unit 201 obtains image information ; from the imaging device 2 [¶ 0036] and also see: "The information processing apparatus 20 functions as an image information obtaining unit 201 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
calculate one or more positions and postures of the holder for picking up the object based on the acquired information; ("From the equations, the control unit 204 calculates the position and orientation of the manipulator 1 in holding the target object 41…The control unit 204 can thus determine the position and orientation of the manipulator 1 in holding the target object 41 from the image of the target object 41 captured by the imaging device 2." [¶0066])
generate a holding operation plan of holding the object by the holder, based on the acquired information; (holding position determination unit 203 ; determines a holding position or a holding position and orientation of the manipulator ; holding operation to be performed ; trajectory to the holding position [¶ 0040] and also see: "The information processing apparatus 20 functions as ; a holding position determination unit 203 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
control a motion of the holder such that the holder performs a pickup operation and a transport operation on the object, based on the holding operation plan; (control unit 204 ; controls operation ; manipulator 1; holding device 3 [¶ 0041] and also see: The information processing apparatus 20 functions as ; a control unit 204 ; Such functions can be implemented by the CPU 21 executing a computer program [¶0035])
detect a failure in a generation of the holding operation plan or a failure in a holding operation on the object by the holder; (information processing apparatus 20 ; success/failure determination ; execution result of the holding operation [¶ 0032] and also see: "The information processing apparatus 20 functions as ; holding success/failure determination unit 205 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
perform a second retry determination of deciding an operation for a retry in a case in which the processing circuitry detects the failure in the pickup of the object by the holder, the second retry determination being different from the first retry determination; (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶ 0032])
However, Yamada(409) fails to disclose the first retry determination.
Nevertheless, Yamada(501) discloses: perform a first retry determination of deciding an operation for a retry in a case in which the processing circuitry detects the failure in the generation of the holding operation plan, and (recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again. [¶ 0089]).
Furthermore, Yamada(409) discloses the second retry being possible in at least ¶0032, and Yamada(501) discloses the first retry being possible in at least ¶0089, and thus, the prior art teaches the case when the first or second retry are possible.
However, Yamada(409) and Yamada(501) do not explicitly disclose determining whether or not it is necessary to acquire information of the object when a retry is possible.
Nevertheless, Huang--who is directed towards robotic object picking--discloses: determine whether or not there is a selectable one among the one or more positions and postures that have already been calculated, (See at least ¶0027 via "The machine learning training data (21) includes a plurality of known grasp location labels (23) assigned to the object (36) positioned in a plurality of different object poses. In an example, the known grasp location labels (23) of the object (36) are first assigned (201) and/or stored (202) by, or at the direction of, at least one user (28) of system (1)." along with Figure 6 which illustrates that Steps 201/202 are the first steps, thus showing the grasp locations (position+posture) have already been calculated. Additionally, see ¶0029 via "In the embodiment, the result (33) of the first executing (211) step is first evaluated (213) according to at least one predetermined performance criteria (35)" and ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)…The second mapping (217) and first iterating (221) steps may be performed in method (200) in response to (e.g., as determined by first logical test (269)) the result (33) of the first executing (211) step not meeting the at least one predetermined performance criteria (35)" which illustrates that, after the execution results are evaluated, another grasp location is selected from the already-calculated grasp positions if the criteria are not met--thus illustrating the determination of whether or not there is a selectable one among the calculated positions/postures)
[Image: media_image1.png (greyscale)]
in a case where it is determined that there is a selectable position and posture that have already been calculated, determining that it is not necessary to acquire information of the object (See at least Figure 6 and ¶0031 via "Referring to FIGS. 1-6, in an embodiment, the first mapping (207) step of method (200) may be performed in the absence of estimating (231), by processor(s) (8), the first pose of the object (36) in the work space (48). In this case, such mapping is object (36) location- and object (36) orientation-invariant. Therefore, there is no need to estimate the object (36) pose in the work space (48)…Object pose estimation can be considered as an intermediate step for the robotic system to first localize the object, then find the grasp points based on the relative location on the object. The systems, methods, and software for controlling object picking and placement by robot systems disclosed herein skip this intermediate step. In the disclosed embodiments, the robotic (e.g., system (1)) can find the grasp points on object (36) without knowing the object (36) position and the relative location from the grasp points and the object (36)." which illustrates determining that it is not necessary to acquire information of the object when a selectable position and posture have already been calculated.)
and generating a new holding operation plan based on the position and posture that have already been calculated (See at least Figure 6 which illustrates the second map 217, first iterate 221, and subsequent iterate 273. Also see at least ¶0037 via "In an embodiment, method (200) may include first determining (225), by processor(s) (8), an autonomous movement control scheme (45). In the embodiment, the autonomous movement control scheme (45) may be first determined (225) based on a position of the first candidate grasp location (29) on the object (36) in the work space (48)…, the first determined (225) autonomous movement control scheme (45) may be stored in, and read from, data structure (502) and/or elsewhere in memory (10) by, or at the direction of, processor(s) (8)" as well as ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)" which illustrates the re-determining of the autonomous movement control scheme based on the selection of a different grasp candidate).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the first retry determination of Yamada(501) into the picking system of Yamada(409), in order to improve efficiency by confirming that a holding operation plan can be generated before expending the energy to execute it. Furthermore, it would have been obvious to determine not to acquire more information about an object that already has calculated grasp candidates, as in Huang, in order to improve the efficiency of the system by not performing unnecessary work: "Therefore, there is no need to estimate the object (36) pose in the work space (48), which provides significant improvements in efficiency as compared to known systems, methods, and software for controlling object picking and placement by robot systems." [Huang ¶0031].
Regarding Claim 18, Yamada(409) discloses:
A control device for controlling a picking device for picking an object by a holder configured to hold the object, (See at least Information processing apparatus 20 ; robot control apparatus ; controlling the operation of the robot [¶0030])
comprising: processing circuitry configured to acquire information of the object; (the image information obtaining unit 201 obtains image information ; from the imaging device 2 [¶ 0036] and also see: "The information processing apparatus 20 functions as an image information obtaining unit 201 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
calculate one or more positions and postures of the holder for picking up the object based on the acquired information; ( "From the equations, the control unit 204 calculates the position and orientation of the manipulator 1 in holding the target object 41…The control unit 204 can thus determine the position and orientation of the manipulator 1 in holding the target object 41 from the image of the target object 41 captured by the imaging device 2." [¶0066])
generate a holding operation plan of holding the object by the holder, based on the acquired information; (holding position determination unit 203 ; determines a holding position or a holding position and orientation of the manipulator ; holding operation to be performed ; trajectory to the holding position [¶ 0040] and also see: "The information processing apparatus 20 functions as ; a holding position determination unit 203 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
control a motion of the holder such that the holder performs a pickup operation and a transport operation on the object, based on the holding operation plan; (control unit 204 ; controls operation ; manipulator 1; holding device 3 [¶ 0041] and also see: The information processing apparatus 20 functions as ; a control unit 204 ; Such functions can be implemented by the CPU 21 executing a computer program [¶0035])
and perform a second retry determination of deciding an operation for a retry in a case in which a failure in a pickup operation on the object by the holder is detected, the second retry determination being different from the first retry determination; (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶ 0032])
However, Yamada(409) fails to disclose the first retry determination. Nevertheless, Yamada(501) discloses: perform a first retry determination of deciding an operation for a retry in a case in which a failure in a generation of the holding operation plan is detected, (recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again. [¶ 0089]).
Furthermore, Yamada(409) discloses the second retry being possible in at least ¶0032, and Yamada(501) discloses the first retry being possible in at least ¶0089, and thus, the prior art teaches the case when the first or second retry are possible.
However, Yamada(409) and Yamada(501) do not explicitly disclose determining whether or not it is necessary to acquire information of the object when a retry is possible.
Nevertheless, Huang--who is directed towards robotic object picking--discloses: determine whether or not there is a selectable one among the one or more positions and postures that have already been calculated, (See at least ¶0027 via "The machine learning training data (21) includes a plurality of known grasp location labels (23) assigned to the object (36) positioned in a plurality of different object poses. In an example, the known grasp location labels (23) of the object (36) are first assigned (201) and/or stored (202) by, or at the direction of, at least one user (28) of system (1)." along with Figure 6 which illustrates that Steps 201/202 are the first steps, thus showing the grasp locations (position+posture) have already been calculated. Additionally, see ¶0029 via "In the embodiment, the result (33) of the first executing (211) step is first evaluated (213) according to at least one predetermined performance criteria (35)" and ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)…The second mapping (217) and first iterating (221) steps may be performed in method (200) in response to (e.g., as determined by first logical test (269)) the result (33) of the first executing (211) step not meeting the at least one predetermined performance criteria (35)" which illustrates that, after the execution results are evaluated, another grasp location is selected from the already-calculated grasp positions if the criteria are not met--thus illustrating the determination of whether or not there is a selectable one among the calculated positions/postures)
[Image: media_image1.png (greyscale)]
in a case where it is determined that there is a selectable position and posture that have already been calculated, determining that it is not necessary to acquire information of the object (See at least Figure 6 and ¶0031 via "Referring to FIGS. 1-6, in an embodiment, the first mapping (207) step of method (200) may be performed in the absence of estimating (231), by processor(s) (8), the first pose of the object (36) in the work space (48). In this case, such mapping is object (36) location- and object (36) orientation-invariant. Therefore, there is no need to estimate the object (36) pose in the work space (48)…Object pose estimation can be considered as an intermediate step for the robotic system to first localize the object, then find the grasp points based on the relative location on the object. The systems, methods, and software for controlling object picking and placement by robot systems disclosed herein skip this intermediate step. In the disclosed embodiments, the robotic (e.g., system (1)) can find the grasp points on object (36) without knowing the object (36) position and the relative location from the grasp points and the object (36)." which illustrates determining that it is not necessary to acquire information of the object when a selectable position and posture have already been calculated.)
and generating a new holding operation plan based on the position and posture that have already been calculated (See at least Figure 6 which illustrates the second map 217, first iterate 221, and subsequent iterate 273. Also see at least ¶0037 via "In an embodiment, method (200) may include first determining (225), by processor(s) (8), an autonomous movement control scheme (45). In the embodiment, the autonomous movement control scheme (45) may be first determined (225) based on a position of the first candidate grasp location (29) on the object (36) in the work space (48)…, the first determined (225) autonomous movement control scheme (45) may be stored in, and read from, data structure (502) and/or elsewhere in memory (10) by, or at the direction of, processor(s) (8)" as well as ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)" which illustrates the re-determining of the autonomous movement control scheme based on the selection of a different grasp candidate).
Therefore, It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to include the element of the first retry determination from Yamada(501) to the picking system of Yamada(409), in order to improve efficiency by performing testing to determine holding operation plans prior to exerting the energy to do so. Furthermore, it would have been obvious to make a determination to not acquire more information about an object that already has calculated grasp candidates such as in Huang, in order to improve the efficiency of the system by not performing unnecessary work: "Therefore, there is no need to estimate the object (36) pose in the work space (48), which provides significant improvements in efficiency as compared to known systems, methods, and software for controlling object picking and placement by robot systems." [Huang ¶0031].
Regarding Claim 19, Yamada(409) discloses:
A non-transitory computer readable storage medium storing a program for controlling a picking device for picking an object by a holder configured to hold the object, (See Yamada(409) via non-transitory storage medium ; computer to perform an information processing method ; controlling operation of a robot ; hold and move a target object [claim 20])
wherein the program causes a processor of a computer to execute the steps of: acquiring information of the object; (the image information obtaining unit 201 obtains image information ; from the imaging device 2 [¶ 0036] and also see: The information processing apparatus 20 functions as an image information obtaining unit 201 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
calculating one or more positions and postures of the holder for picking up the object based on the acquired information; ("From the equations, the control unit 204 calculates the position and orientation of the manipulator 1 in holding the target object 41…The control unit 204 can thus determine the position and orientation of the manipulator 1 in holding the target object 41 from the image of the target object 41 captured by the imaging device 2." [¶0066])
generating a holding operation plan of holding the object by the holder, based on the acquired information; (holding position determination unit 203 ; determines a holding position or a holding position and orientation of the manipulator ; holding operation to be performed ; trajectory to the holding position [¶ 0040] and also see: "The information processing apparatus 20 functions as ; a holding position determination unit 203 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
controlling a motion of the holder such that the holder performs a pickup operation and a transport operation on the object, based on the holding operation plan; (control unit 204 ; controls operation ; manipulator 1; holding device 3 [¶ 0041] and also see: The information processing apparatus 20 functions as ; a control unit 204 ; Such functions can be implemented by the CPU 21 executing a computer program [¶0035])
and performing a second retry determination of deciding an operation for a retry in a case in which a failure in a pickup operation on the object by the holder is detected, the second retry determination being different from the first retry determination, and (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶ 0032])
However, Yamada(409) fails to disclose the first retry determination. Nevertheless, Yamada(501) discloses: performing a first retry determination of deciding an operation for a retry in a case in which a failure in a generation of the holding operation plan is detected, (recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again. [¶ 0089]).
Furthermore, Yamada(409) discloses the second retry being possible in at least ¶0032, and Yamada(501) discloses the first retry being possible in at least ¶0089; thus, the prior art teaches the case in which the first or second retry is possible.
However, Yamada(409) and Yamada(501) do not explicitly disclose determining whether or not it is necessary to acquire information of the object when a retry is possible.
Nevertheless, Huang--which is directed towards robotic object picking--discloses: determining whether or not there is a selectable one among the one or more positions and postures that have already been calculated, (See at least ¶0027 via "The machine learning training data (21) includes a plurality of known grasp location labels (23) assigned to the object (36) positioned in a plurality of different object poses. In an example, the known grasp location labels (23) of the object (36) are first assigned (201) and/or stored (202) by, or at the direction of, at least one user (28) of system (1)." along with Figure 6 which illustrates that Steps 201/202 are the first steps, thus showing the grasp locations (position+posture) have already been calculated. Additionally, See ¶0029 via "In the embodiment, the result (33) of the first executing (211) step is first evaluated (213) according to at least one predetermined performance criteria (35)" and ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)…The second mapping (217) and first iterating (221) steps may be performed in method (200) in response to (e.g., as determined by first logical test (269)) the result (33) of the first executing (211) step not meeting the at least one predetermined performance criteria (35)" which illustrates that after the execution results are evaluated, another grasp location is selected from the already calculated grasp positions if the criteria are not met--thus illustrating the determination of whether or not there is a selectable one among the one or more positions/postures)
in a case where it is determined that there is a selectable position and posture that have already been calculated, determining that it is not necessary to acquire information of the object (See at least Figure 6 and ¶0031 via "Referring to FIGS. 1-6, in an embodiment, the first mapping (207) step of method (200) may be performed in the absence of estimating (231), by processor(s) (8), the first pose of the object (36) in the work space (48). In this case, such mapping is object (36) location- and object (36) orientation-invariant. Therefore, there is no need to estimate the object (36) pose in the work space (48)…Object pose estimation can be considered as an intermediate step for the robotic system to first localize the object, then find the grasp points based on the relative location on the object. The systems, methods, and software for controlling object picking and placement by robot systems disclosed herein skip this intermediate step. In the disclosed embodiments, the robotic (e.g., system (1)) can find the grasp points on object (36) without knowing the object (36) position and the relative location from the grasp points and the object (36)." which illustrates the determining that it is not necessary to acquire information of the object when a selectable position and posture have been calculated.)
and generating a new holding operation plan based on the position and posture that have already been calculated (See at least Figure 6 which illustrates the second map 217, first iterate 221, and subsequent iterate 273. Also see at least ¶0037 via "In an embodiment, method (200) may include first determining (225), by processor(s) (8), an autonomous movement control scheme (45). In the embodiment, the autonomous movement control scheme (45) may be first determined (225) based on a position of the first candidate grasp location (29) on the object (36) in the work space (48)…, the first determined (225) autonomous movement control scheme (45) may be stored in, and read from, data structure (502) and/or elsewhere in memory (10) by, or at the direction of, processor(s) (8)" as well as ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)" which illustrates the re-determining of the autonomous movement control scheme based on the selection of a different grasp candidate).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the element of the first retry determination from Yamada(501) into the picking system of Yamada(409), in order to improve efficiency by testing whether a holding operation plan can be generated before expending the energy to execute one. Furthermore, it would have been obvious to make a determination not to acquire more information about an object that already has calculated grasp candidates, such as in Huang, in order to improve the efficiency of the system by not performing unnecessary work: "Therefore, there is no need to estimate the object (36) pose in the work space (48), which provides significant improvements in efficiency as compared to known systems, methods, and software for controlling object picking and placement by robot systems." [Huang ¶0031].
Regarding Claim 20, Yamada(409) discloses:
A method for picking an object by a holder configured to hold the object, comprising steps, performed by a processor of a computer, of: (illustrated in at least figures 1 and 4)
acquiring information of the object; (the image information obtaining unit 201 obtains image information ; from the imaging device 2 [¶ 0036] and also see: "The information processing apparatus 20 functions as an image information obtaining unit 201 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
calculating one or more positions and postures of the holder for picking up the object based on the acquired information; ( "From the equations, the control unit 204 calculates the position and orientation of the manipulator 1 in holding the target object 41…The control unit 204 can thus determine the position and orientation of the manipulator 1 in holding the target object 41 from the image of the target object 41 captured by the imaging device 2." [¶0066])
generating a holding operation plan of holding the object by the holder, based on the acquired information; (holding position determination unit 203 ; determines a holding position or a holding position and orientation of the manipulator ; holding operation to be performed ; trajectory to the holding position [¶ 0040] and also see: "The information processing apparatus 20 functions as ; a holding position determination unit 203 ; Such functions can be implemented by the CPU 21 executing a computer program" [¶0035])
controlling motion of the holder such that the holder performs a pickup operation and a transport operation on the object, performed based on the holding operation plan; (control unit 204 ; controls operation ; manipulator 1; holding device 3 [¶ 0041] and also see: The information processing apparatus 20 functions as ; a control unit 204 ; Such functions can be implemented by the CPU 21 executing a computer program [¶0035])
and performing a second retry determination of deciding an operation for a retry in a case in which a failure in a pickup operation on the object by the holder is detected, the second retry determination being different from the first retry determination; (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶ 0032])
However, Yamada(409) fails to disclose the first retry determination. Nevertheless, Yamada(501) discloses: performing a first retry determination of deciding an operation for a retry in a case in which a failure in a generation of the holding operation plan is detected, (recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again. [¶ 0089]).
Furthermore, Yamada(409) discloses the second retry being possible in at least ¶0032, and Yamada(501) discloses the first retry being possible in at least ¶0089; thus, the prior art teaches the case in which the first or second retry is possible.
However, Yamada(409) and Yamada(501) do not explicitly disclose determining whether or not it is necessary to acquire information of the object when a retry is possible.
Nevertheless, Huang--which is directed towards robotic object picking--discloses: determining whether or not there is a selectable one among the one or more positions and postures that have already been calculated, (See at least ¶0027 via "The machine learning training data (21) includes a plurality of known grasp location labels (23) assigned to the object (36) positioned in a plurality of different object poses. In an example, the known grasp location labels (23) of the object (36) are first assigned (201) and/or stored (202) by, or at the direction of, at least one user (28) of system (1)." along with Figure 6 which illustrates that Steps 201/202 are the first steps, thus showing the grasp locations (position+posture) have already been calculated. Additionally, See ¶0029 via "In the embodiment, the result (33) of the first executing (211) step is first evaluated (213) according to at least one predetermined performance criteria (35)" and ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)…The second mapping (217) and first iterating (221) steps may be performed in method (200) in response to (e.g., as determined by first logical test (269)) the result (33) of the first executing (211) step not meeting the at least one predetermined performance criteria (35)" which illustrates that after the execution results are evaluated, another grasp location is selected from the already calculated grasp positions if the criteria are not met--thus illustrating the determination of whether or not there is a selectable one among the one or more positions/postures)
in a case where it is determined that there is a selectable position and posture that have already been calculated, determining that it is not necessary to acquire information of the object (See at least Figure 6 and ¶0031 via "Referring to FIGS. 1-6, in an embodiment, the first mapping (207) step of method (200) may be performed in the absence of estimating (231), by processor(s) (8), the first pose of the object (36) in the work space (48). In this case, such mapping is object (36) location- and object (36) orientation-invariant. Therefore, there is no need to estimate the object (36) pose in the work space (48)…Object pose estimation can be considered as an intermediate step for the robotic system to first localize the object, then find the grasp points based on the relative location on the object. The systems, methods, and software for controlling object picking and placement by robot systems disclosed herein skip this intermediate step. In the disclosed embodiments, the robotic (e.g., system (1)) can find the grasp points on object (36) without knowing the object (36) position and the relative location from the grasp points and the object (36)." which illustrates the determining that it is not necessary to acquire information of the object when a selectable position and posture have been calculated.)
and generating a new holding operation plan based on the position and posture that have already been calculated (See at least Figure 6 which illustrates the second map 217, first iterate 221, and subsequent iterate 273. Also see at least ¶0037 via "In an embodiment, method (200) may include first determining (225), by processor(s) (8), an autonomous movement control scheme (45). In the embodiment, the autonomous movement control scheme (45) may be first determined (225) based on a position of the first candidate grasp location (29) on the object (36) in the work space (48)…, the first determined (225) autonomous movement control scheme (45) may be stored in, and read from, data structure (502) and/or elsewhere in memory (10) by, or at the direction of, processor(s) (8)" as well as ¶0033 via "In an embodiment, method (200) may second mapping (217), by processor(s) (8), at least a second candidate grasp location (43) on the object (36) in the work space (48) in the first pose. The at least a second candidate grasp location (43) is different from the first candidate grasp location (29)" which illustrates the re-determining of the autonomous movement control scheme based on the selection of a different grasp candidate).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the element of the first retry determination from Yamada(501) into the picking system of Yamada(409), in order to improve efficiency by testing whether a holding operation plan can be generated before expending the energy to execute one. Furthermore, it would have been obvious to make a determination not to acquire more information about an object that already has calculated grasp candidates, such as in Huang, in order to improve the efficiency of the system by not performing unnecessary work: "Therefore, there is no need to estimate the object (36) pose in the work space (48), which provides significant improvements in efficiency as compared to known systems, methods, and software for controlling object picking and placement by robot systems." [Huang ¶0031].
Claims 3-7, and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yamada et al. (US 20190329409 A1 - Yamada(409)), Yamada et al. (US 20180276501 A1 - Yamada(501)), and Huang et al. (US 20200331144 A1) in view of Sieb et al. (US 20210233258 A1).
Regarding Claim 3, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 2.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to, in the second retry determination, (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again. [¶ 0032], the information processing apparatus 20 ; repeatedly updates the holding position ; causes the robot 10 to perform the holding operation until the target object 41 is held. [¶ 0080 & ¶ 0081], and also see ¶0035 regarding processing circuitry)
However, Yamada(409), Yamada(501), and Huang do not explicitly disclose, but Sieb--which is in the same field of endeavor--discloses:
determine whether or not the holder contacted the object, and determine that the position of the object within the first region has not changed between before and after the pickup operation in a case in which the processing circuitry determines that the holder did not contact the object (camera 130 ; capture a first image ; attempt to pick an item ; capture a second image ; determine if box 115 was successfully picked ; object correspondence ; determine that box 115 is still in the same spot ; robotic arm 105 failed to pick it up [¶ 0040]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include the method of using imaging/object correspondence to determine whether the object was contacted, from Sieb, in the second retry determination of Yamada(409), in order to enable the picking device disclosed by Yamada(409), Yamada(501), and Huang to more accurately determine whether the pick can be accomplished with the same position/orientation, thus conserving energy by first determining whether it is necessary to recalculate or change the position/orientation before automatically expending the energy to do so.
Regarding Claim 4, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 2.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to, in the second retry determination, (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again. [¶ 0032], and also see ¶0035 regarding processing circuitry)
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035. However, Yamada(409), Yamada(501), and Huang do not explicitly disclose, but Sieb discloses:
acquire an image of the object within the first region in a case in which the (See Sieb via Figure 4 ; initial image to determine object location ; secondary image to determine whether object moved after the pickup/holding operation and also: The second image taken of same scene as first image ; process of FIG. 4 ; detecting failed picks ; determine correspondences [¶ 0046]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the technique of taking additional images to determine movement of objects, and thereby picking success/failure, from Sieb into the second retry determination of Yamada(409), in order to enable the picking device disclosed by Yamada(409), Yamada(501), and Huang to more accurately determine whether the pick needs to be attempted again and whether it can be accomplished with the same position/orientation, thus conserving energy by first determining whether it is necessary to recalculate or change the position/orientation before automatically expending the energy to do so.
Regarding Claim 5, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 4.
Furthermore, Yamada(409) discloses: and wherein the processing circuitry is further configured to, in the second retry determination (If the target object 41 fails to be held ; repeats processing ; determining a holding position different from the previous one ; executing the holding operation again [¶ 0032] and also see ¶0035 regarding processing circuitry).
However, Yamada(409), Yamada(501), and Huang do not explicitly disclose, but Sieb discloses: wherein the image is a two-dimensional image at least partially including the first region, (See at least 2D Images shown in figures 5-7 ; green dot ; corresponding objects ; predicted that they are in the same place ; blue dot ; no longer in the bin in image ; was present when image 605 was taken ; but is missing from image 610 [¶ 0052]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to utilize a 2D image, such as in Sieb, in the picking system disclosed by Yamada(409), Yamada(501), and Huang in order to improve the efficiency of the retry determination. By utilizing a 2D image for Sieb’s segmentation method, the objects can be tracked more efficiently, and the retry can be determined more quickly, as additional factors, such as depth, do not have to be considered with a 2D image. Motivation to combine Sieb’s 2D imaging into the picking system disclosed by Yamada(409), Yamada(501), and Huang comes from knowledge well known in the art, but also from Sieb: “the output of the segmentation model plays an important role in tracking process 800 because it provides a starting point for finding object correspondences based on segmentation results” [¶ 0054].
Regarding Claim 6, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 5.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to: (See at least ¶0035 regarding processing circuitry)
and wherein the processing circuitry is further configured to, in the second retry determination, exclude the position and posture for picking up the object within the identified region, from a subject of the selection (See at least: target object 41 fails to be held ; information processing apparatus 20 ; repeats processing for determining a holding position different from the previous one ; executing the holding operation again [¶ 0032] and also see ¶0035 regarding processing circuitry)
However, Yamada(409) does not explicitly disclose, but Yamada(501) discloses: select a position and posture for generating the holding operation plan from the one or more positions and postures; and (See at least: generate the position and orientation ; target object 1 can be held ; it is determined that the target object 1 can be held ; flow shifts to S237 [¶ 0089]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teachings of Yamada(409) and Yamada(501), in the instant claim, for the same reasoning and/or rationale as stated above with respect to Claim 1.
Regarding Claim 7, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 6.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to, in the second retry determination, acquire distance image data at least partially including the first region (See at least: both a two-dimensional color image and a distance image may be used [¶ 0057] ; reliability of the distance image [¶ 0129] and also see ¶0035 regarding processing circuitry)
and wherein the processing circuitry is further configured to recalculate one or more positions and postures of the holder for picking up the object based on the distance image data (See at least: information processing apparatus 20 ; repeats processing for determining a holding position different from the previous one ; executing the holding operation again [¶ 0032]).
However, Yamada(409) does not explicitly disclose, but Yamada(501) discloses: in a case in which there is no position and posture which can be selected by the processing circuitry, (See at least: recognition apparatus 320 ; does not generate the position and orientation ; target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 ; perform the imaging again [¶ 0089]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teachings of Yamada(409) and Yamada(501), in the instant claim, for the same reasoning and/or rationale as stated above with respect to Claim 1.
Regarding Claim 11, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 1.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035. However, Yamada(409), Yamada(501), and Huang do not explicitly disclose, but Sieb discloses: wherein the processing circuitry is further configured to perform a third retry determination for deciding an operation for a retry, in a case in which the processing circuitry detects a failure in the transport operation on the object by the holder, the third retry determination being different from the first retry determination and the second retry determination (See Sieb via Additional applications may include target placement correspondence ; repicking of dropped items [¶ 0032]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to determine whether to repick a dropped item, such as in Sieb, in the picking system disclosed by Yamada(409), Yamada(501), and Huang in order to improve the accuracy of the picking device and to complete more picks successfully while minimizing human intervention. Additionally, Sieb teaches further motivation for the determination of repicking a dropped item: “the system may avoid downtime or other problems” [¶ 0032].
Regarding Claim 12, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 11.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035, and Sieb discloses: wherein the processing circuitry is further configured to estimate a drop position in a case in which the object has dropped during the transport operation of the object (See at least: means for tracking objects across multiple steps of a process such as in various locations ; target placement correspondence ; repicking of dropped items [¶ 0032] ; drops an item ; has the ability to recognize that the item was dropped ; determine if the item is in a location where it can be repicked ; system may avoid downtime or other problems [¶ 0032]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the element of continuous location tracking and the option to repick a dropped item from Sieb into the picking system disclosed by Yamada(409), Yamada(501), and Huang for the same reasoning and/or rationale as stated above with respect to Claim 11.
Regarding Claim 13, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 12.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035 and Sieb discloses: wherein the processing circuitry is further configured to, in the third retry determination, acquire information of the object within the first region in a case in which the processing circuitry estimates that the drop position is within the first region (See at least: drops an item ; has the ability to recognize that the item was dropped ; determine if the item is in a location where it can be repicked ; system may avoid downtime or other problems [¶ 0032] ; camera 135 may be used to track ; first image taken by camera 130 ; camera 135 may recognize that robotic arm 105 has picked up and is currently holding ; may recognize that box 115 has been successfully placed onto conveyor belt ; camera 135 may recognize that box 115 is neither in the possession of robotic arm 105 nor was it placed on conveyor belt 125 ; may have therefore been dropped [¶ 0041]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the element of continuous location tracking, the option to repick a dropped item, and the use of cameras to determine status/location from Sieb into the picking system disclosed by Yamada(409), Yamada(501), and Huang in view of Sieb for the same reasoning and/or rationale as stated above with respect to Claim 11.
Regarding Claim 14, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 12.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035 and also: decide to continue the picking (See at least: CPU 21 determines ; a next target object 41 ; processing proceeds to step S2 ; CPU 21 repeats the processing of step S2 and the subsequent steps until there is no target object 41 ; imaging device 2 may capture an image of the point of supply of the target objects 41; presence or absence of the target object 41 may be determined [¶ 0079])
Additionally, Sieb discloses: wherein the processing circuitry is further configured to, in the third retry determination, and
in a case in which the processing circuitry estimates that the drop position is within a second region which is a transport destination of the object (See at least: drops an item ; has the ability to recognize that the item was dropped ; determine if the item is in a location where it can be repicked ; system may avoid downtime or other problems [¶ 0032] ; tracking system ; using a different camera or computer vision system ; identify the object in a different location ; verify successful placement in another location [¶ 0047]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the elements of continuous location tracking, the option to repick a dropped item, and the use of cameras to determine a status/location from Sieb into the picking system disclosed by Yamada(409), Yamada(501), and Huang, for the same reasoning and/or rationale as stated above with respect to Claim 11.
Regarding Claim 15, Yamada(409), Yamada(501), and Huang in view of Sieb disclose the picking system according to Claim 12.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035, and Sieb discloses: wherein the processing circuitry is further configured to acquire an output from a passage sensor configured to detect that the object passes at a certain height above the first region, and (See Sieb via camera 130 images the contents of bin 120 ; camera 135 images a region of conveyor belt 125 ; may comprise one or more cameras ; array of cameras for imaging [¶ 0038])
wherein the processing circuitry is further configured to, in the third retry determination, estimate that the object has dropped within the first region, based on at least the output from the passage sensor (See Sieb via camera 135 may be used to track ; first image taken by camera 130 ; camera 135 may recognize that robotic arm 105 has picked up and is currently holding ; may recognize that box 115 has been successfully placed onto conveyor belt ; camera 135 may recognize that box 115 is neither in the possession of robotic arm 105 nor was it placed on conveyor belt 125 ; may have therefore been dropped [¶ 0041]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the elements of continuous location tracking, the option to repick a dropped item, and the use of cameras to determine a status/location from Sieb into the picking system disclosed by Yamada(409), Yamada(501), and Huang, for the same reasoning and/or rationale as stated above with respect to Claim 11. Additionally, it is understood that height would be accounted for, as the embodiment tracks dropped items and would need to adjust height in order to accurately repick a dropped item.
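Purely as an illustrative sketch (not part of the prior-art record), the passage-sensor-based drop estimation recited in Claim 15 might be modeled as follows; the sensor-height constant and all names are hypothetical.

```python
# Illustrative only: the third retry determination estimates that the object
# dropped within the first region when the passage sensor (mounted at a fixed
# height above that region) detects the object passing while the arm is no
# longer holding it. SENSOR_HEIGHT_M is a hypothetical parameter.

SENSOR_HEIGHT_M = 0.5  # assumed height of the passage-sensor plane


def estimate_drop_in_first_region(passage_detected: bool,
                                  arm_holding: bool) -> bool:
    """Estimate a drop within the first region based on the sensor output:
    the object crossed the sensor plane and the arm does not hold it."""
    return passage_detected and not arm_holding
```

A sketch under stated assumptions, not the method of any cited reference.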
Regarding Claim 16, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 1.
Furthermore, Yamada(409) discloses the processing circuitry in ¶0035. Additionally, Yamada(501) discloses: in a case in which the processing circuitry cannot generate a holding operation plan (See at least: S213 ; moves the manipulator 400 ; imaging position and orientation generated in S212 [¶ 0055] ; processes S212 to S215 can be performed multiple times [¶ 0089]).
Furthermore, Sieb discloses: wherein the processing circuitry is further configured to determine whether or not it is possible to continue the picking (See at least: Keeping track of picking failures ; determining when and if to stop attempting to pick [¶ 0029] ; Figure 12 ; illustrating an embodiment of a process to change a robotic arm grasp of an item [¶ 0086]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date to incorporate the element of changing the grasp (motion parameters) from Sieb into the picking system disclosed by Yamada(409), Yamada(501), and Huang in order to determine whether picking can be continued. In order to change the grasp on an item, one with ordinary skill in the art can determine that the parameters can be changed for this to happen. The motivation to combine these references is the same as that stated with respect to Claim 1, and furthermore so that the picking device has a higher success rate.
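Purely as an illustrative sketch (not part of the prior-art record), the failure tracking and stop decision Sieb describes in ¶ 0029 might be modeled as follows; the retry limit and all names are hypothetical.

```python
# Illustrative only: keep track of picking failures and decide when and if to
# stop attempting to pick (Sieb ¶ 0029). MAX_ATTEMPTS is a hypothetical
# retry budget, not a value from any cited reference.

MAX_ATTEMPTS = 3


def can_continue_picking(failure_count: int, plan_available: bool) -> bool:
    """Picking continues only while a holding operation plan can be
    generated and the retry budget has not been exhausted."""
    return plan_available and failure_count < MAX_ATTEMPTS
```

A sketch under stated assumptions of the claimed continue/stop determination.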
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yamada et. al (US 20190329409 A1 - Yamada(409) ), Yamada et. al (US 20180276501 A1 - Yamada(501) ), and Huang et. al (US 20200331144 A1) in view of Chavez (US 20200270071).
Regarding Claim 9, Yamada(409), Yamada(501), and Huang disclose the picking system according to Claim 8.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to, in a case in which the processing circuitry detects a failure in the calculation of the route, (See at least: control unit 204 determines how to move ; based on the holding position ; holding position and orientation ; or the trajectory [¶ 0060]).
Additionally, Yamada(501) discloses: in the first retry determination, (See at least: recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again [¶ 0089]).
However, Yamada(409), Yamada(501), and Huang do not explicitly disclose, but Chavez, who is in the same field of endeavor, discloses: determine whether or not it is possible to calculate a route different from the route, based on the selected position and posture (See at least: planning (or re-planning) ; generate a plan to pick/place items ; available sensor information [¶ 0038] ; plan to pick ; consideration ; trajectory ; robotic arm [¶ 0081] ; plan to pick ; updated [¶ 0059] ; trajectory able to be adjusted [¶ 0082]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date to incorporate the consideration of the trajectory before any movement is executed from Chavez into the picking system disclosed by Yamada(409), Yamada(501), and Huang. Although it is not explicitly stated, one with ordinary skill in the art would be able to determine that if a planned route has a potential collision or obstacles, that route may be rejected. Thus, an alternate route can be updated or calculated.
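Purely as an illustrative sketch (not part of the prior-art record), the rationale above, rejecting a candidate route that meets an obstacle and calculating an alternate route, might be modeled as follows; routes are simplified to waypoint lists and all names are hypothetical.

```python
# Illustrative only: Chavez-style planning/re-planning in which a candidate
# route with a potential collision is rejected and an alternate route is
# calculated. Routes are modeled as lists of waypoint tuples; this is a
# hypothetical simplification, not an implementation from any reference.

def route_is_clear(route, obstacles) -> bool:
    """A route is viable only if none of its waypoints hits an obstacle."""
    return not any(waypoint in obstacles for waypoint in route)


def plan_route(candidate_routes, obstacles):
    """Return the first collision-free candidate route, or None when no
    different route can be calculated (cf. Claim 10's failure case)."""
    for route in candidate_routes:
        if route_is_clear(route, obstacles):
            return route
    return None
```

A sketch under stated assumptions of the route-rejection reasoning only.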
Regarding Claim 10, Yamada(409), Yamada(501), and Huang in view of Chavez disclose the picking system according to Claim 9.
Furthermore, Yamada(409) discloses: wherein the processing circuitry is further configured to (See at least ¶0035) and determine whether or not it is possible to select a position and posture different from the selected position and posture (See at least: information processing apparatus 20 ; repeats processing for determining a holding position different from the previous one ; executing the holding operation again [¶ 0032] ; multiple holding positions can be calculated [¶ 0081]).
Additionally, Yamada(501) discloses: in the first retry determination (See at least: via recognition apparatus 320 ; does not generate the position and orientation ; which target object 1 can be held ; determined that the target object 1 is not held ; flow returns to S232 to perform the imaging again [¶ 0089]).
Furthermore, Chavez discloses: in a case in which the processing circuitry determines that it is not possible to calculate a route different from the route (See at least: planning (or re-planning) ; generate a plan to pick/place items ; available sensor information [¶ 0038] ; plan to pick ; consideration ; trajectory ; robotic arm [¶ 0081] ; plan to pick ; updated [¶ 0059] ; trajectory able to be adjusted [¶ 0082]).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date to incorporate the consideration of calculating/updating different routes from Chavez into the picking system disclosed by Yamada(409), Yamada(501), and Huang for the same reasoning and rationale as stated with respect to Claim 9.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYLA RENEE DOROS whose telephone number is (703)756-1415. The examiner can normally be reached M-F (8-5) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.R.D./Examiner, Art Unit 3657 /ABBY LIN/Supervisory Patent Examiner, Art Unit 3657