DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
1. Acknowledgement is made of applicant’s claim for priority to JP2023-216157 filed on 12/21/2023.
Information Disclosure Statement
2. The information disclosure statements (IDS) filed on 11/21/2024 and 12/11/2025 have been considered by the examiner.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-13 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ott et al. (WO 2023205209, hereinafter Ott) in view of Tawara (US 20220016784, hereinafter Tawara).
Regarding claim 1, Ott teaches a control assistance system comprising circuitry configured (see at least Fig. 1) to:
acquire a workpiece model indicating a shape of a workpiece (see at least Fig. 1 and [0062]: “Additionally, or alternatively, the model logic 171 may be configured to parse or process a CAD file (e.g., a CAD model) to determine an order of assembly, primitive components or shapes in the CAD model, changes and deformations to the primitive shapes or the entire assembly to adapt to the physical part, material property (such as reflectance, friction, etc.), or a combination thereof.”);
acquire a robot model indicating a robot having an end effector for holding the workpiece (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task. For example, the sensor data 180 may include one or more images (e.g., 2D image data captured by the sensor 130 at a particular orientation relative to the first object 140 or the second object 142). The perception logic 172 may overlap or stitch together multiple images to generate a representation, such as a 2D image data or 3D image data associated with the representation.”);
establish, on the workpiece, two or more candidate positions that are two or more candidates for a holding position on the workpiece to be held by the end effector, based on the workpiece model (see at least [0008]: “As used herein, the term “pose” includes both position information (e.g., x, y, z coordinates) and orientation information (e.g., relative angle between the object and the ground or a holder on which the object is positioned), such as a position and orientation of an object within a coordinate system.”; [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object. Accordingly, a search space (or a number of grasp locations) of the object, a number of candidate grasp poses, or a combination thereof, may be reduced based on available information associated with robot system 1600.”);
virtually execute, for each of the two or more candidate positions, a picking process in which the end effector holds the workpiece at the candidate position and a subsequent process for the workpiece held at the candidate position, by a simulation based on the workpiece model and the robot model (see at least Figs. 1-2 and [0033]: “For example, the assembly task may include completion of: locating one or more target objects, determining a pose of the one or more target objects, picking a set of objects of the one or more target objects, determining a pose of the set of objects relative to a robot device after picking, positioning the set of objects at the desired relative pose, and coordinating movements of one or more robot devices, performing an assembly or manufacturing task or operation (e.g., welding, coupling, bolting, pinning, etc.), or a combination thereof.”; [0062]: “In some implementations, a CAD file (e.g., a CAD model) can be used to determine the necessary primitive assembly tasks for a particular assembly / manufacturing task. For example, CAD models can help in planning the assembly process, including tasks such as insertion, alignment, and placement of components. In other words, the CAD model can assist or be used in understanding the basic steps required to assemble a product or structure.”; [0067]: “To illustrate, the perception logic 172 may be configured to identify or select which tool of a plurality of available tools is best suited to pick-up or grasp an object based on a CAD file/model, 2D information, 3D information, or a combination thereof.”);
determine, as the holding position, one of at least one candidate position among the two or more candidate positions, wherein the at least one candidate position enables the picking process and the subsequent process to be completed in the simulation (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof…Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object.”; [0157]: “The picking 1624 may rank the one or more candidate grasp poses (or a portion thereof) based on the determined grasp score(s). Additionally, or alternatively, the picking 1624 may select the a grasp score, such as the grasp score having the highest value, and determine or plan a path of a tool (e.g., the gripper) to the grasp location associated with the selected grasp score. In some implementations, the picking 1624 may determine the path using the kinematic reachability analysis and collision analysis.”); and
control the robot placed in a real working space, based on the holding position (see at least Fig. 16 and [0171]: “The control device 1650 may be configured to control or perform one or more operations of the robot system 1600. For example, the control device 1650 may be configured to control or perform one or more operations of pre-processing 1610, the assembly 1620, the weld 1630, the post-processing 1640, or a combination thereof. In some implementations, the control device 1650 may be configured to control at least a portion of the robot system 1600 to perform coordinated motion to reach difficult areas or weld areas under different part orientations (e.g., IF, 2F, 1G, 2G, etc.), while assembling complicated parts.”; [0172]: “For example, the control device may determine the first order/process (e.g., the part assembly order 1652) based on the CAD model of the final product. Based on execution of the first order/process, the control device 1650 may acquire data and feedback. Based on the data and feedback, the control device may determine a second order/process to assemble the product.”).
Ott fails to explicitly teach setting two or more candidate positions that are two or more candidates for a holding position to be held by the end effector, based on the workpiece model.
However, Tawara teaches an apparatus and system for robot image processing that sets two or more candidate positions that are two or more candidates for a holding position to be held by an end effector, based on a workpiece model (see at least [0139]: “The holding position thus set is a possible holding position held by the robot RBT. A plurality of possible holding positions held by the robot RBT can be each set in association with a corresponding search model of the workpiece WK pre-registered. For example, two possible holding positions can be set in association with one search model, and four possible holding positions can be set in association with another search model. The set possible holding position can be stored in the storage part 320 in association with the search model.”; [0140]: “Therefore, copying already registered possible holding position information and changing some position parameters set for this possible holding position to allow the possible holding position information to be saved as a new possible holding position makes it possible to register, without time and effort, a plurality of possible holding positions in a simplified manner. Further, similarly, it is possible to read out an existing possible holding position, appropriately modify the position parameter, and save the change.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to set two or more candidate positions that are two or more candidates for a holding position on a workpiece to be held by an end effector, based on a workpiece model, with a reasonable expectation of success, in order to register, without time and effort, a plurality of possible holding positions in a simplified manner (Tawara, [0140]).
Regarding claim 2, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the subsequent process includes a placing process in which the robot places the held workpiece at a designated position, and an additional process for the workpiece in a state of being held by the robot at the designated position (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object. Accordingly, a search space (or a number of grasp locations) of the object, a number of candidate grasp poses, or a combination thereof, may be reduced based on available information associated with robot system 1600.”), and
wherein the circuitry is configured to virtually execute the picking process, the placing process, and the additional process (see at least [0157]: “The picking 1624 may rank the one or more candidate grasp poses (or a portion thereof) based on the determined grasp score(s). Additionally, or alternatively, the picking 1624 may select the a grasp score, such as the grasp score having the highest value, and determine or plan a path of a tool (e.g., the gripper) to the grasp location associated with the selected grasp score. In some implementations, the picking 1624 may determine the path using the kinematic reachability analysis and collision analysis. If the picking 1624 determines that a path of the gripper to grasp the object based on the candidate grasp pose is possible or valid, the candidate grasp pose is selected. In some implementations, the path may include a path of the gripper to the object, a path of the gripper grasped to the object from a location/position where the gripper grasps the object to a location/position where a weld operation is to be performed on the object, or a combination thereof. If the picking 1624 determines that the path is not possible or is invalid, the picking 1624 selects another candidate grasp based on the determined grasp score(s). For example, the picking 1624 may select a next highest grasp score and determine whether a path associated with the grasp location of the next highest grasp score is possible or valid.”).
Regarding claim 3, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to identify at least one candidate position where the robot operates normally and no interference is detected, as the at least one candidate position that enables the picking process and the subsequent process to be completed (see at least [0154]: “The picking 1624 may, for at least one of the one or more candidate grasp poses, check one or more constraints, such as one or more geometric constraints…In some implementations, the picking 1624 may check the one or more constraints for each of the one or more candidate grasp poses.”; [0155]: “The assembly feasibility constraint may be associated with verification that a grasp location or a tool location/position does not or will not interfere with another object, such as another object to which the object is to be coupled (e.g., welded). The weld feasibility constraint may be associated with verification that a grasp location or a tool location/position is not too close (e.g., within a threshold distance) from a weld area and/or does not interfere with (e.g., restrict or limit) mobility of a welding torch.”).
Regarding claim 4, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to generate a virtual shape of the workpiece based on the workpiece model (see at least Fig. 1 and [0062]: “Additionally, or alternatively, the model logic 171 may be configured to parse or process a CAD file (e.g., a CAD model) to determine an order of assembly, primitive components or shapes in the CAD model, changes and deformations to the primitive shapes or the entire assembly to adapt to the physical part, material property (such as reflectance, friction, etc.), or a combination thereof.”), and establish the two or more candidate positions on the virtual shape, thereby establishing the two or more candidate positions on the workpiece (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object. Accordingly, a search space (or a number of grasp locations) of the object, a number of candidate grasp poses, or a combination thereof, may be reduced based on available information associated with robot system 1600.”).
Ott fails to explicitly teach setting the two or more candidate positions.
However, Tawara teaches an apparatus and system for robot image processing that sets two or more candidate positions (see at least [0139]: “The holding position thus set is a possible holding position held by the robot RBT. A plurality of possible holding positions held by the robot RBT can be each set in association with a corresponding search model of the workpiece WK pre-registered. For example, two possible holding positions can be set in association with one search model, and four possible holding positions can be set in association with another search model. The set possible holding position can be stored in the storage part 320 in association with the search model.”; [0140]: “Therefore, copying already registered possible holding position information and changing some position parameters set for this possible holding position to allow the possible holding position information to be saved as a new possible holding position makes it possible to register, without time and effort, a plurality of possible holding positions in a simplified manner. Further, similarly, it is possible to read out an existing possible holding position, appropriately modify the position parameter, and save the change.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to set two or more candidate positions, with a reasonable expectation of success, in order to register, without time and effort, a plurality of possible holding positions in a simplified manner (Tawara, [0140]).
Regarding claim 5, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to:
acquire a device model indicating a different device different from the robot (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task.”; [0100]: “As shown, workspace 102 includes the first robot device 110, the first tool 120, the second robot device 212, the second tool 222, the first sensor 234, and the manufacturing tool 126.”);
virtually execute, as the subsequent process, a process performed by the robot and the different device in cooperation with each other on the workpiece held at the candidate position, by the simulation based further on the device model (see at least Fig. 2 and [0070]: “In some implementations, the perception logic 172 may be configured to perform or imitate an objects joining technique to bring two or more objects, such as the first object 140 and the second object 142, into a spatial relationship (e.g., to a fixed relative pose). The objects joining technique may determine how to orient the objects in space relative to each other and manage the mobility and reachability constraints related to one or more robot devices, one or more tools, or a combination thereof, to position the objects in a fixed pose and perform a manufacturing task.”; [0100]: “The workspace 102 of the robot system 200 may include one or more devices or components of the robot system 200. As shown, workspace 102 includes the first robot device 110, the first tool 120, the second robot device 212, the second tool 222, the first sensor 234, and the manufacturing tool 126.”); and
identify the at least one candidate position that enables the picking process and the subsequent process to be completed without detecting interference for both the robot and the different device (see at least [0070]: “The objects joining technique may include defining a hypothetical space, such as a bounding box, the workspace 102 in which the objects may be assembled. This hypothetical space may be a cube (or other shape), of a certain length, width, and height. The perception logic 172 may perform the objects joining technique based on or in conjunction with a kinematic reachability analysis and collision analysis perform by the kinematic reachability analysis and collision logic 174. For example, the controller 308 may perform a reachability analysis and collision analysis on one or more robot devices or objects that may need to enter the hypothetical space to determine whether the one or more robot device or the objects can exist in one or more states in the hypothetical space (alone or together with the other object). The perception logic 172 or the kinematic reachability analysis and collision logic 174 may consider a pose of one of the objects as a reference pose and then perform the reachability analysis and collision analysis on the other object with respect to the reference pose of the one object. Additionally, or alternatively, the reachability analysis and collision analysis may be performed based on the CAD model of the assembled objects.”).
Regarding claim 6, modified Ott teaches the limitations of claim 5. Ott further teaches wherein the different device is a different robot (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task.”; [0100]: “As shown, workspace 102 includes the first robot device 110, the first tool 120, the second robot device 212, the second tool 222, the first sensor 234, and the manufacturing tool 126.”), and
wherein the circuitry is configured to virtually execute the subsequent process including a placing process in which the robot places the held workpiece at a designated position and a cooperative process in which the different robot works on the workpiece in a state of being held by the robot at the designated position (see at least [0070]: “In some implementations, the perception logic 172 may be configured to perform or imitate an objects joining technique to bring two or more objects, such as the first object 140 and the second object 142, into a spatial relationship (e.g., to a fixed relative pose). The objects joining technique may determine how to orient the objects in space relative to each other and manage the mobility and reachability constraints related to one or more robot devices, one or more tools, or a combination thereof, to position the objects in a fixed pose and perform a manufacturing task. The objects joining technique may include defining a hypothetical space, such as a bounding box, the workspace 102 in which the objects may be assembled. This hypothetical space may be a cube (or other shape), of a certain length, width, and height. The perception logic 172 may perform the objects joining technique based on or in conjunction with a kinematic reachability analysis and collision analysis perform by the kinematic reachability analysis and collision logic 174…The perception logic 172 or the kinematic reachability analysis and collision logic 174 may consider a pose of one of the objects as a reference pose and then perform the reachability analysis and collision analysis on the other object with respect to the reference pose of the one object. Additionally, or alternatively, the reachability analysis and collision analysis may be performed based on the CAD model of the assembled objects.”).
Regarding claim 7, modified Ott teaches the limitations of claim 6. Ott further teaches wherein the circuitry is configured to virtually execute a work in a plurality of working areas on the workpiece by the different robot, in the cooperative process (see at least [0070]: “In some implementations, the perception logic 172 may be configured to perform or imitate an objects joining technique to bring two or more objects, such as the first object 140 and the second object 142, into a spatial relationship (e.g., to a fixed relative pose). The objects joining technique may determine how to orient the objects in space relative to each other and manage the mobility and reachability constraints related to one or more robot devices, one or more tools, or a combination thereof, to position the objects in a fixed pose and perform a manufacturing task…The perception logic 172 or the kinematic reachability analysis and collision logic 174 may consider a pose of one of the objects as a reference pose and then perform the reachability analysis and collision analysis on the other object with respect to the reference pose of the one object. Additionally, or alternatively, the reachability analysis and collision analysis may be performed based on the CAD model of the assembled objects.”).
Regarding claim 8, modified Ott teaches the limitations of claim 7. Ott further teaches wherein the circuitry is configured to:
virtually execute, as the placing process, a process in which the robot places the held workpiece at the designated position on a different workpiece (see at least [0070]: “In some implementations, the perception logic 172 may be configured to perform or imitate an objects joining technique to bring two or more objects, such as the first object 140 and the second object 142, into a spatial relationship (e.g., to a fixed relative pose). The objects joining technique may determine how to orient the objects in space relative to each other and manage the mobility and reachability constraints related to one or more robot devices, one or more tools, or a combination thereof, to position the objects in a fixed pose and perform a manufacturing task.”); and
virtually execute, as the cooperative process, a process in which the different robot fixes the workpiece on the different workpiece (see at least [0070]: “The objects joining technique may include defining a hypothetical space, such as a bounding box, the workspace 102 in which the objects may be assembled...The perception logic 172 may perform the objects joining technique based on or in conjunction with a kinematic reachability analysis and collision analysis perform by the kinematic reachability analysis and collision logic 174. For example, the controller 308 may perform a reachability analysis and collision analysis on one or more robot devices or objects that may need to enter the hypothetical space to determine whether the one or more robot device or the objects can exist in one or more states in the hypothetical space (alone or together with the other object)…Additionally, or alternatively, the reachability analysis and collision analysis may be performed based on the CAD model of the assembled objects.”).
Regarding claim 9, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to control the robot placed in the real working space so as to perform a real picking process of holding the workpiece present in the real working space at the holding position by the end effector (see at least [0072]: “In some implementations, the fit up logic 173 is configured to perform a registration process…To illustrate, the fit up logic 173 may perform the registration process using the point cloud of a CAD model of the first object 140 and a representation (e.g., a 2D or 3D representation) of the first object 140 (generated using the sensor data 180) by sampling the CAD model point cloud and the representation.”; Fig. 16 and [0172]: “For example, the control device may determine the first order/process (e.g., the part assembly order 1652) based on the CAD model of the final product. Based on execution of the first order/process, the control device 1650 may acquire data and feedback. Based on the data and feedback, the control device may determine a second order/process to assemble the product.”).
Regarding claim 10, modified Ott teaches the limitations of claim 9. Ott further teaches wherein the circuitry is configured to control the robot so as to perform the real picking process and a real subsequent process on the workpiece held at the holding position (see at least Fig. 16 and [0171]: “The control device 1650 may be configured to control or perform one or more operations of the robot system 1600. For example, the control device 1650 may be configured to control or perform one or more operations of pre-processing 1610, the assembly 1620, the weld 1630, the post-processing 1640, or a combination thereof. In some implementations, the control device 1650 may be configured to control at least a portion of the robot system 1600 to perform coordinated motion to reach difficult areas or weld areas under different part orientations (e.g., IF, 2F, 1G, 2G, etc.), while assembling complicated parts.”; [0172]: “For example, the control device may determine the first order/process (e.g., the part assembly order 1652) based on the CAD model of the final product. Based on execution of the first order/process, the control device 1650 may acquire data and feedback. Based on the data and feedback, the control device may determine a second order/process to assemble the product.”).
Regarding claim 11, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to acquire, in a case where the workpiece to be held by the end effector is designated by a user, the workpiece model of the designated workpiece (see at least Fig. 1 and [0062]: “Additionally, or alternatively, the model logic 171 may be configured to parse or process a CAD file (e.g., a CAD model) to determine an order of assembly, primitive components or shapes in the CAD model, changes and deformations to the primitive shapes or the entire assembly to adapt to the physical part, material property (such as reflectance, friction, etc.), or a combination thereof.”; [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object.”; [0234]: “It is noted that semi-autonomous or autonomous robots may have these above-described abilities only in part, and where some user given or selected parameters may be required, or user involvement may be necessary in other ways, such as placing the objects on the positioned s), or providing annotations on models (e.g., computer aided design (CAD) models)”).
Regarding claim 12, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to acquire, in a case where the robot is designated by a user, the robot model of the designated robot (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task. For example, the sensor data 180 may include one or more images (e.g., 2D image data captured by the sensor 130 at a particular orientation relative to the first object 140 or the second object 142). The perception logic 172 may overlap or stitch together multiple images to generate a representation, such as a 2D image data or 3D image data associated with the representation.”; [0234]: “It is noted that semi-autonomous or autonomous robots may have these above-described abilities only in part, and where some user given or selected parameters may be required, or user involvement may be necessary in other ways, such as placing the objects on the positioned s), or providing annotations on models (e.g., computer aided design (CAD) models)”).
Regarding claim 13, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to:
temporarily establish a plurality of the candidate positions on the workpiece based on the workpiece model (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof.”);
select the two or more candidate positions from the plurality of candidate positions based on each of the plurality of candidate positions and a shape of a contact surface of the end effector capable of contacting the workpiece (see at least [0068]: “In some implementations, the perception logic 172 may be configured to analyze an object to determine one or more properties of the object, such as the object’s size, shape, etc. For example, the perception logic 172 may analyze the object based on a CAD file/model, 2D information, 3D information, or a combination thereof. The perception logic 172 may be configured to identify one or more tools (e.g., 120) which may be used to pick-up or grasp the object. To illustrate, the perception logic 172 may be configured to identify or select which tool of a plurality of available tools is best suited to pick-up or grasp an object based on a CAD file/model, 2D information, 3D information, or a combination thereof. For example, if an object is small and has an irregular shape, a gripper with soft, flexible fingers may be selected. Alternatively, if the object is large and heavy, a gripper with a strong, rigid grip may be selected.”; [0152]: “To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object.”); and
establish the selected two or more candidate positions for the simulation (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof.”; [0154]: “The picking 1624 may, for at least one of the one or more candidate grasp poses, check one or more constraints, such as one or more geometric constraints. For example, the picking 1624 may check the one or more constraints to confirm that proper contact may be achieved between the tool (e.g., the gripper device) and the object. In some implementations, the picking 1624 may check the one or more constraints for each of the one or more candidate grasp poses.”).
Ott fails to explicitly teach setting a plurality of the candidate positions.
However, Tawara teaches an apparatus and system for robot image processing that sets a plurality of candidate positions (see at least [0139]: “The holding position thus set is a possible holding position held by the robot RBT. A plurality of possible holding positions held by the robot RBT can be each set in association with a corresponding search model of the workpiece WK pre-registered. For example, two possible holding positions can be set in association with one search model, and four possible holding positions can be set in association with another search model. The set possible holding position can be stored in the storage part 320 in association with the search model.”; [0140]: “Therefore, copying already registered possible holding position information and changing some position parameters set for this possible holding position to allow the possible holding position information to be saved as a new possible holding position makes it possible to register, without time and effort, a plurality of possible holding positions in a simplified manner. Further, similarly, it is possible to read out an existing possible holding position, appropriately modify the position parameter, and save the change.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to set a plurality of candidate positions, with a reasonable expectation of success, in order to register, without time and effort, a plurality of possible holding positions in a simplified manner (Tawara, [0140]).
Regarding claim 19, Ott teaches a processor-executable method (see at least Fig. 1) comprising:
acquiring a workpiece model indicating a shape of a workpiece (see at least Fig. 1 and [0062]: “Additionally, or alternatively, the model logic 171 may be configured to parse or process a CAD file (e.g., a CAD model) to determine an order of assembly, primitive components or shapes in the CAD model, changes and deformations to the primitive shapes or the entire assembly to adapt to the physical part, material property (such as reflectance, friction, etc.), or a combination thereof.”);
acquiring a robot model indicating a robot having an end effector for holding the workpiece (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task. For example, the sensor data 180 may include one or more images (e.g., 2D image data captured by the sensor 130 at a particular orientation relative to the first object 140 or the second object 142). The perception logic 172 may overlap or stitch together multiple images to generate a representation, such as a 2D image data or 3D image data associated with the representation.”);
establishing, on the workpiece, two or more candidate positions that are two or more candidates for a holding position on the workpiece to be held by the end effector, based on the workpiece model (see at least [0008]: “As used herein, the term “pose” includes both position information (e.g., x, y, z coordinates) and orientation information (e.g., relative angle between the object and the ground or a holder on which the object is positioned), such as a position and orientation of an object within a coordinate system.”; [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object. Accordingly, a search space (or a number of grasp locations) of the object, a number of candidate grasp poses, or a combination thereof, may be reduced based on available information associated with robot system 1600.”);
virtually executing, for each of the two or more candidate positions, a picking process in which the end effector holds the workpiece at the candidate position and a subsequent process for the workpiece held at the candidate position, by a simulation based on the workpiece model and the robot model (see at least Figs. 1-2 and [0033]: “For example, the assembly task may include completion of: locating one or more target objects, determining a pose of the one or more target objects, picking a set of objects of the one or more target objects, determining a pose of the set of objects relative to a robot device after picking, positioning the set of objects at the desired relative pose, and coordinating movements of one or more robot devices, performing an assembly or manufacturing task or operation (e.g., welding, coupling, bolting, pinning, etc.), or a combination thereof.”; [0062]: “In some implementations, a CAD file (e.g., a CAD model) can be used to determine the necessary primitive assembly tasks for a particular assembly / manufacturing task. For example, CAD models can help in planning the assembly process, including tasks such as insertion, alignment, and placement of components. In other words, the CAD model can assist or be used in understanding the basic steps required to assemble a product or structure.”; [0067]: “To illustrate, the perception logic 172 may be configured to identify or select which tool of a plurality of available tools is best suited to pick-up or grasp an object based on a CAD file/model, 2D information, 3D information, or a combination thereof.”);
determining, as the holding position, one of at least one candidate position among the two or more candidate positions, wherein the at least one candidate position enables the picking process and the subsequent process to be completed in the simulation (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof…Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object.”; [0157]: “The picking 1624 may rank the one or more candidate grasp poses (or a portion thereof) based on the determined grasp score(s). Additionally, or alternatively, the picking 1624 may select the a grasp score, such as the grasp score having the highest value, and determine or plan a path of a tool (e.g., the gripper) to the grasp location associated with the selected grasp score. In some implementations, the picking 1624 may determine the path using the kinematic reachability analysis and collision analysis.”); and
controlling the robot placed in a real working space, based on the holding position (see at least Fig. 16 and [0171]: “The control device 1650 may be configured to control or perform one or more operations of the robot system 1600. For example, the control device 1650 may be configured to control or perform one or more operations of pre-processing 1610, the assembly 1620, the weld 1630, the post-processing 1640, or a combination thereof. In some implementations, the control device 1650 may be configured to control at least a portion of the robot system 1600 to perform coordinated motion to reach difficult areas or weld areas under different part orientations (e.g., IF, 2F, 1G, 2G, etc.), while assembling complicated parts.”; [0172]: “For example, the control device may determine the first order/process (e.g., the part assembly order 1652) based on the CAD model of the final product. Based on execution of the first order/process, the control device 1650 may acquire data and feedback. Based on the data and feedback, the control device may determine a second order/process to assemble the product.”).
Ott fails to explicitly teach setting two or more candidate positions that are two or more candidates for a holding position to be held by the end effector, based on the workpiece model.
However, Tawara teaches an apparatus and system for robot image processing that sets two or more candidate positions that are two or more candidates for a holding position to be held by an end effector, based on a workpiece model (see at least [0139]: “The holding position thus set is a possible holding position held by the robot RBT. A plurality of possible holding positions held by the robot RBT can be each set in association with a corresponding search model of the workpiece WK pre-registered. For example, two possible holding positions can be set in association with one search model, and four possible holding positions can be set in association with another search model. The set possible holding position can be stored in the storage part 320 in association with the search model.”; [0140]: “Therefore, copying already registered possible holding position information and changing some position parameters set for this possible holding position to allow the possible holding position information to be saved as a new possible holding position makes it possible to register, without time and effort, a plurality of possible holding positions in a simplified manner. Further, similarly, it is possible to read out an existing possible holding position, appropriately modify the position parameter, and save the change.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to set two or more candidate positions that are two or more candidates for a holding position on a workpiece to be held by an end effector, based on a workpiece model, with a reasonable expectation of success, in order to register, without time and effort, a plurality of possible holding positions in a simplified manner (Tawara, [0140]).
Regarding claim 20, Ott teaches a non-transitory computer-readable storage medium storing processor-executable instructions (see at least Fig. 1) to:
acquire a workpiece model indicating a shape of a workpiece (see at least Fig. 1 and [0062]: “Additionally, or alternatively, the model logic 171 may be configured to parse or process a CAD file (e.g., a CAD model) to determine an order of assembly, primitive components or shapes in the CAD model, changes and deformations to the primitive shapes or the entire assembly to adapt to the physical part, material property (such as reflectance, friction, etc.), or a combination thereof.”);
acquire a robot model indicating a robot having an end effector for holding the workpiece (see at least Figs. 1-2 and [0064]: “In some implementations, the perception logic 172 may use the sensor data 180 to generate a representation associated with the workspace 102 or one or more objects associated with an assembly task. For example, the sensor data 180 may include one or more images (e.g., 2D image data captured by the sensor 130 at a particular orientation relative to the first object 140 or the second object 142). The perception logic 172 may overlap or stitch together multiple images to generate a representation, such as a 2D image data or 3D image data associated with the representation.”);
establish, on the workpiece, two or more candidate positions that are two or more candidates for a holding position on the workpiece to be held by the end effector, based on the workpiece model (see at least [0008]: “As used herein, the term “pose” includes both position information (e.g., x, y, z coordinates) and orientation information (e.g., relative angle between the object and the ground or a holder on which the object is positioned), such as a position and orientation of an object within a coordinate system.”; [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device. The one or more candidate grasp poses may be generated randomly, at locations uniformly spaced across the object, or a combination thereof. Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object. Accordingly, a search space (or a number of grasp locations) of the object, a number of candidate grasp poses, or a combination thereof, may be reduced based on available information associated with robot system 1600.”);
virtually execute, for each of the two or more candidate positions, a picking process in which the end effector holds the workpiece at the candidate position and a subsequent process for the workpiece held at the candidate position, by a simulation based on the workpiece model and the robot model (see at least Figs. 1-2 and [0033]: “For example, the assembly task may include completion of: locating one or more target objects, determining a pose of the one or more target objects, picking a set of objects of the one or more target objects, determining a pose of the set of objects relative to a robot device after picking, positioning the set of objects at the desired relative pose, and coordinating movements of one or more robot devices, performing an assembly or manufacturing task or operation (e.g., welding, coupling, bolting, pinning, etc.), or a combination thereof.”; [0062]: “In some implementations, a CAD file (e.g., a CAD model) can be used to determine the necessary primitive assembly tasks for a particular assembly / manufacturing task. For example, CAD models can help in planning the assembly process, including tasks such as insertion, alignment, and placement of components. In other words, the CAD model can assist or be used in understanding the basic steps required to assemble a product or structure.”; [0067]: “To illustrate, the perception logic 172 may be configured to identify or select which tool of a plurality of available tools is best suited to pick-up or grasp an object based on a CAD file/model, 2D information, 3D information, or a combination thereof.”);
determine, as the holding position, one of at least one candidate position among the two or more candidate positions, wherein the at least one candidate position enables the picking process and the subsequent process to be completed in the simulation (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof…Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object.”; [0157]: “The picking 1624 may rank the one or more candidate grasp poses (or a portion thereof) based on the determined grasp score(s). Additionally, or alternatively, the picking 1624 may select the a grasp score, such as the grasp score having the highest value, and determine or plan a path of a tool (e.g., the gripper) to the grasp location associated with the selected grasp score. In some implementations, the picking 1624 may determine the path using the kinematic reachability analysis and collision analysis.”); and
control the robot placed in a real working space, based on the holding position (see at least Fig. 16 and [0171]: “The control device 1650 may be configured to control or perform one or more operations of the robot system 1600. For example, the control device 1650 may be configured to control or perform one or more operations of pre-processing 1610, the assembly 1620, the weld 1630, the post-processing 1640, or a combination thereof. In some implementations, the control device 1650 may be configured to control at least a portion of the robot system 1600 to perform coordinated motion to reach difficult areas or weld areas under different part orientations (e.g., IF, 2F, 1G, 2G, etc.), while assembling complicated parts.”; [0172]: “For example, the control device may determine the first order/process (e.g., the part assembly order 1652) based on the CAD model of the final product. Based on execution of the first order/process, the control device 1650 may acquire data and feedback. Based on the data and feedback, the control device may determine a second order/process to assemble the product.”).
Ott fails to explicitly teach setting two or more candidate positions that are two or more candidates for a holding position to be held by the end effector, based on the workpiece model.
However, Tawara teaches an apparatus and system for robot image processing that sets two or more candidate positions that are two or more candidates for a holding position to be held by an end effector, based on a workpiece model (see at least [0139]: “The holding position thus set is a possible holding position held by the robot RBT. A plurality of possible holding positions held by the robot RBT can be each set in association with a corresponding search model of the workpiece WK pre-registered. For example, two possible holding positions can be set in association with one search model, and four possible holding positions can be set in association with another search model. The set possible holding position can be stored in the storage part 320 in association with the search model.”; [0140]: “Therefore, copying already registered possible holding position information and changing some position parameters set for this possible holding position to allow the possible holding position information to be saved as a new possible holding position makes it possible to register, without time and effort, a plurality of possible holding positions in a simplified manner. Further, similarly, it is possible to read out an existing possible holding position, appropriately modify the position parameter, and save the change.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to set two or more candidate positions that are two or more candidates for a holding position on a workpiece to be held by an end effector, based on a workpiece model, with a reasonable expectation of success, in order to register, without time and effort, a plurality of possible holding positions in a simplified manner (Tawara, [0140]).
Claim Rejections - 35 USC § 103
5. Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ott et al. (WO 2023205209, hereinafter Ott) and Tawara (US 20220016784, hereinafter Tawara) in view of Narita et al. (US 20220219338, hereinafter Narita).
Regarding claim 14, modified Ott teaches the limitations of claim 13. Ott further teaches wherein the circuitry is configured to:
calculate, for each of the plurality of candidate positions, a contact between the contact surface and the workpiece, wherein the contact indicates that the contact surface contacts the workpiece (see at least [0154]: “For example, the picking 1624 may check the one or more constraints to confirm that proper contact may be achieved between the tool (e.g., the gripper device) and the object. In some implementations, the picking 1624 may check the one or more constraints for each of the one or more candidate grasp poses.”; [0155]: “The one or more constraints may include a tool-object collision constraint, a grasp contact constraint, an assembly feasibility constraint, a weld accessibility constraint, or a combination thereof, as illustrative, non-limiting examples…The grasp contact constraint may be associated with verification that the grippers (e.g., fingers) of the tool make adequate contact with the object. The assembly feasibility constraint may be associated with verification that a grasp location or a tool location/position does not or will not interfere with another object, such as another object to which the object is to be coupled (e.g., welded).”); and
select the two or more candidate positions from the plurality of candidate positions based on the contact of each of the plurality of candidate positions (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof. To illustrate, to determine a mesh or a pose of the object, the picking 1624 may generate or determine mesh or pose information, such as 3D or point cloud information of the object. To determine the one or more candidate grasp poses, the picking 1624 may generate one or more candidate grasp poses associated with the part. For example, the grasp pose may include or be associated with a grasp pose of a tool, such as a gripper device, a clamp, a vacuum, etc., coupled to a robot device.”).
Ott fails to explicitly teach calculating a degree of contact between the contact surface and the workpiece, wherein the degree of contact is an index indicating how much of the contact surface contacts the workpiece, and selecting based on the degree of contact.
However, Narita teaches an apparatus and system for robot gripping that calculates a degree of contact between a contact surface and a workpiece, wherein the degree of contact is an index indicating how much of the contact surface contacts the workpiece, and that selects based on the degree of contact (see at least [0088] “C of FIG. 4 illustrates how the sticking ratio (slip region/sticking region) in the contact surface changes. Regions illustrated in dark gray indicate the sticking region, and regions illustrated in light gray indicate the slip region. As a shear force F.sub.X (unit: Newton (N)), which is a force acting in a shear direction, increases, the slip region spreads from a periphery of the contact surface, and when the sticking ratio reaches 0%, the entire region transitions to the slip region. Therefore, it can be said that, in order to grip the object without slipping, the gripping force is only required to be adjusted to such an extent that the sticking ratio does not reach 0%.”; [0138]: “In the case of Method 1, the change in the sticking ratio is predicted from the information regarding the curvature or shape of the contact surface or the like, and the gripping force is controlled.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Ott to incorporate the teachings of Narita and provide means to calculate a degree of contact between a contact surface and a workpiece, wherein the degree of contact is an index indicating how much of the contact surface contacts the workpiece, and to select based on the degree of contact, with a reasonable expectation of success, in order to establish a point to grip the object without slipping (Narita, [0088]).
Regarding claim 15, modified Ott teaches the limitations of claim 14. Ott further teaches wherein the circuitry is configured to calculate, for each of the plurality of candidate positions, the contact in a case where a center of the contact surface in contact with the workpiece is positioned at the candidate position (see at least [0156]: “The picking 1624 may rank the one or more candidate grasp poses. In some implementations, the picking 1624 may rank the one or more candidate grasp poses that satisfy or pass the constraint check. The picking 1624 may, for a candidate grasp pose of the one or more candidate grasp poses, determine a grasp score. In some implementations, the picking 1624 may determine a grasp score and/or rank the one or more candidate grasp poses based on a grasp location, a centroid of an object, an amount of a gripper (or a finger) contact with the object, or a combination thereof.”).
Ott fails to explicitly teach calculate the degree of contact.
However, Narita teaches an apparatus and system for robot gripping that calculates the degree of contact (see at least [0088] “C of FIG. 4 illustrates how the sticking ratio (slip region/sticking region) in the contact surface changes. Regions illustrated in dark gray indicate the sticking region, and regions illustrated in light gray indicate the slip region. As a shear force F.sub.X (unit: Newton (N)), which is a force acting in a shear direction, increases, the slip region spreads from a periphery of the contact surface, and when the sticking ratio reaches 0%, the entire region transitions to the slip region. Therefore, it can be said that, in order to grip the object without slipping, the gripping force is only required to be adjusted to such an extent that the sticking ratio does not reach 0%.”; [0138]: “In the case of Method 1, the change in the sticking ratio is predicted from the information regarding the curvature or shape of the contact surface or the like, and the gripping force is controlled.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Ott to incorporate the teachings of Narita and provide means to calculate the degree of contact, with a reasonable expectation of success, in order to establish a point to grip the object without slipping (Narita, [0088]).
Regarding claim 16, modified Ott teaches the limitations of claim 14. Ott further teaches wherein the circuitry is configured to, for each of the plurality of candidate positions:
generate a plurality of sample points on the contact surface (see at least [0154]: “For example, the picking 1624 may check the one or more constraints to confirm that proper contact may be achieved between the tool (e.g., the gripper device) and the object. In some implementations, the picking 1624 may check the one or more constraints for each of the one or more candidate grasp poses.”);
calculate, for each of the plurality of sample points, a distance from the sample point to a surface of the workpiece (see at least [0156]: “The picking 1624 may rank the one or more candidate grasp poses. In some implementations, the picking 1624 may rank the one or more candidate grasp poses that satisfy or pass the constraint check. The picking 1624 may, for a candidate grasp pose of the one or more candidate grasp poses, determine a grasp score. In some implementations, the picking 1624 may determine a grasp score and/or rank the one or more candidate grasp poses based on a grasp location, a centroid of an object, an amount of a gripper (or a finger) contact with the object, or a combination thereof.”); and
calculate the contact based on the distance of each of the plurality of sample points (see at least [0155]: “The one or more constraints may include a tool-object collision constraint, a grasp contact constraint, an assembly feasibility constraint, a weld accessibility constraint, or a combination thereof, as illustrative, non-limiting examples…The grasp contact constraint may be associated with verification that the grippers (e.g., fingers) of the tool make adequate contact with the object. The assembly feasibility constraint may be associated with verification that a grasp location or a tool location/position does not or will not interfere with another object, such as another object to which the object is to be coupled (e.g., welded).”).
Ott fails to explicitly teach calculating the degree of contact.
However, Tawara teaches an apparatus and system for robot gripping that calculates the degree of contact (see at least [0088] “C of FIG. 4 illustrates how the sticking ratio (slip region/sticking region) in the contact surface changes. Regions illustrated in dark gray indicate the sticking region, and regions illustrated in light gray indicate the slip region. As a shear force F.sub.X (unit: Newton (N)), which is a force acting in a shear direction, increases, the slip region spreads from a periphery of the contact surface, and when the sticking ratio reaches 0%, the entire region transitions to the slip region. Therefore, it can be said that, in order to grip the object without slipping, the gripping force is only required to be adjusted to such an extent that the sticking ratio does not reach 0%.”; [0138]: “In the case of Method 1, the change in the sticking ratio is predicted from the information regarding the curvature or shape of the contact surface or the like, and the gripping force is controlled.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to calculate the degree of contact, with a reasonable expectation of success, in order to establish a point to grip the object without slipping [0088].
Regarding claim 17, modified Ott teaches the limitations of claim 14. Ott further teaches wherein the circuitry is configured to automatically determine one of the at least one candidate position as the holding position based on the contact of each of the at least one candidate position that enables the picking process and the subsequent process to be completed in the simulation (see at least [0156]: “The picking 1624 may rank the one or more candidate grasp poses. In some implementations, the picking 1624 may rank the one or more candidate grasp poses that satisfy or pass the constraint check. The picking 1624 may, for a candidate grasp pose of the one or more candidate grasp poses, determine a grasp score.”: [0157]: “Additionally, or alternatively, the picking 1624 may select the a grasp score, such as the grasp score having the highest value, and determine or plan a path of a tool (e.g., the gripper) to the grasp location associated with the selected grasp score. In some implementations, the picking 1624 may determine the path using the kinematic reachability analysis and collision analysis. If the picking 1624 determines that a path of the gripper to grasp the object based on the candidate grasp pose is possible or valid, the candidate grasp pose is selected. In some implementations, the path may include a path of the gripper to the object, a path of the gripper grasped to the object from a location/position where the gripper grasps the object to a location/position where a weld operation is to be performed on the object, or a combination thereof.”).
Ott fails to explicitly teach determining one of the at least one candidate position as the holding position based on the degree of contact.
However, Tawara teaches an apparatus and system for robot gripping that determines one of the at least one candidate position as a holding position based on a degree of contact (see at least [0088] “C of FIG. 4 illustrates how the sticking ratio (slip region/sticking region) in the contact surface changes. Regions illustrated in dark gray indicate the sticking region, and regions illustrated in light gray indicate the slip region. As a shear force F.sub.X (unit: Newton (N)), which is a force acting in a shear direction, increases, the slip region spreads from a periphery of the contact surface, and when the sticking ratio reaches 0%, the entire region transitions to the slip region. Therefore, it can be said that, in order to grip the object without slipping, the gripping force is only required to be adjusted to such an extent that the sticking ratio does not reach 0%.”; [0138]: “In the case of Method 1, the change in the sticking ratio is predicted from the information regarding the curvature or shape of the contact surface or the like, and the gripping force is controlled.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Tawara and provide means to determine one of the at least one candidate position as a holding position based on a degree of contact, with a reasonable expectation of success, in order to establish a point to grip the object without slipping [0088].
Claim Rejections - 35 USC § 103
6. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Ott et al. (WO 2023205209, hereinafter Ott) and Tawara (US 20220016784, hereinafter Tawara) in view of Oda et al. (JP 2021013996, hereinafter Oda).
Regarding claim 18, modified Ott teaches the limitations of claim 1. Ott further teaches wherein the circuitry is configured to:
display the at least one candidate position that enables the picking process and the subsequent process to be completed in the simulation, on a display device of a user (see at least [0125]: “For example, the controller 308 may further interact with a user interface (UI) (not expressly shown in FIG. 3) by providing a graphical interface on the UI by which a user may interact with the robot system 300 and provide inputs to the robot system 300 and by which the controller 308 may interact with the user, such as by providing and/or receiving various types of information to and/or from a user (e.g., identified seams that are candidates for welding, possible weld plans, welding parameter options or selections, etc.). The UI may be any type of interface, including a touchscreen interface, a voice-activated interface, a keypad interface, a combination thereof, etc.”); and
determine one candidate position selected from the at least one candidate position, as the holding position (see at least [0152]: “To pick-up or grasp the object, the picking 1624 may including determining a mesh or a pose of the object, determining one or more candidate grasp poses, checking one or more constraints, ranking the one or more grasp poses, selecting a grasp pose, determining or verifying a travel path of the grasped object, or a combination thereof…Additionally, or alternatively, the candidate grasp poses may be determined based on a weld plan or a weld location associated with the object.”; [0157]: “The picking 1624 may rank the one or more candidate grasp poses (or a portion thereof) based on the determined grasp score(s).”).
Ott fails to explicitly teach determining one candidate position selected by the user from the at least one candidate position as the holding position.
However, Oda teaches a method and system for a manufacturing robot that determines one candidate position selected by a user from at least one candidate position, as a holding position (see at least page 7: “Through such a user interface device 400, it is possible to notify the user of the progress of various arithmetic processes, or to set various parameters for controlling the processes described later. Further, the user interface device 400 can be used as a user designation of a holding position candidate described later, various user notifications, an input means of a user command regarding the progress of processing, and the like.”; pages 9-10: “Alternatively, a plurality of holding position candidates for the object may be generated based on the holding position specified by the user via the user interface device 400. In that case, based on the design information of the actual object, the availability of the holding position specified by the user is determined via the user interface, and if the holding position specified by the user cannot be used, the user is notified to that effect. It may be configured… You can then check if the user has specified such an inappropriate holding position.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ott to incorporate the teachings of Oda and provide means to determine one candidate position selected by a user from the at least one candidate position as the holding position, with a reasonable expectation of success, in order to check whether the user has specified an inappropriate holding position [page 10].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Marchese et al. (US 10919151) teaches a system and method for controlling a robotic picking device that ranks candidate contact points of an object to optimize picking the object with an end effector.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIEN MINH LE whose telephone number is (571)272-3903. The examiner can normally be reached Monday to Friday (8:30am-5:30pm eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached on (571)272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.M.L./Examiner, Art Unit 3656
/KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656