DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
This office action is in response to amendments filed 01/07/2026. Claims 1-3, 5-13 are pending.
Applicant’s arguments and amendments to the claims with respect to the interpretation of Claim 10 under 35 USC 112(f) have been fully considered and are persuasive. The interpretation of Claim 10 under 35 USC 112(f) has been withdrawn.
Applicant’s arguments and amendments to the claims with respect to the prior art rejections of Claims 1-3, 5-11 under 35 USC 102/103 have been fully considered and are persuasive. The rejections of Claims 1-3, 5-11 under 35 USC 102/103 have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Ikeguchi (the new ground of rejection is a rejection under 35 USC 103 of all pending claims, relying on the combination of Vu and Ikeguchi).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vu et al. (US 20220088787, hereinafter Vu) in view of Ikeguchi et al. (US 20240402674, hereinafter Ikeguchi).
Regarding Claim 1, Vu teaches:
a collision prediction method for predicting a collision between a multi-joint robot and a nearby object (see Claim 18), the collision prediction method comprising:
a first acquisition step of acquiring configuration information of the robot and posture information of the robot (see at least "For example, these data may be provided by the robot 402 or the robot controller 407 via a safety-rated communication protocol providing access to safety-rated data. The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092) ;
a second acquisition step of acquiring point cloud data including the robot and the nearby object (see at least "In many cases, however, it is desirable to monitor an area in which there are at least some objects during normal operation, such as one or more machines and workpieces on which the machine is operating. In these cases, analysis module 342 may be configured to identify intruding objects that are unexpected or that may be humans. One suitable approach to such classification is to cluster individual occupied voxels into objects that can be analyzed at a higher level." in par. 0042 and “To achieve this, analysis module 342 may implement any of several conventional, well-known clustering techniques such as Euclidean clustering, K-means clustering and Gibbs-sampling clustering. Any of these or similar algorithms can be used to identify clusters of occupied voxels from 3D point cloud data” in par. 0043) ;
a classification step of classifying the point cloud data acquired in the second acquisition step into point cloud data corresponding to the robot and point cloud data corresponding to the nearby object, based on the configuration information of the robot and the posture information of the robot (see at least "In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on, and (3) other foreign objects, including people, that may be moving in unpredictable ways and that can be harmed by the machinery. It may or may not be necessary to conclusively classify people versus other unknown foreign objects. It may be necessary to definitively identify elements of the machinery as such, because by definition these will always be in a state of “collision” with the machinery itself and thus will cause the system to erroneously stop the machinery if detected and not properly classified. Similarly, machinery typically comes into contact with workpieces, but it is typically hazardous for machinery to come into contact with people. Therefore, analysis module 342 should be able to distinguish between workpieces and unknown foreign objects, especially people." in par. 0045); and
a prediction step of predicting a collision between the robot and the nearby object, based on a classification result obtained in the classification step (see at least "A further refinement is for SADM 425 to modulate maximum velocity proportionally to the minimum possible time to collision—that is, to project the robot's current state forward in time, project the intrusions toward the robot trajectory, and identify the nearest potential collision or violation of an object-specified protective separation distance. This refinement has the advantage that the robot will move more quickly away from an obstacle than toward it, which maximizes throughput while still correctly preserving safety." in par. 0099) .
Vu does not appear to explicitly teach all of the following, but Ikeguchi does teach:
wherein in the prediction step, at least one of a first prediction method, a second prediction method, is executable,
the first prediction method being a method of predicting the collision by setting at least one of the point cloud data corresponding to the robot or the point cloud data corresponding to the nearby object as a voxel and determining an overlap, based on the voxel (see at least “The decimation unit 113 demarcates the point cloud data obtained by the distance measurement unit 15 using cubes with one side having a length of 0.1 mm (FIG. 3A). The decimation unit 113 then decimates the point cloud data by reducing the number of points contained in each cube to one (FIG. 3B).” in par. 0063 and "The determination unit 115 determines whether or not any of the point cloud data Da, Db, and Dc extracted by the extraction unit 114 is located inside the robot region Ra. In FIG. 5B, the point cloud data Da is positioned inside the robot region Ra. In this case, the determination unit 115 determines that the manipulator 3 will interfere with an object present in the workspace." in par. 0075) , and
the second prediction method being a method of predicting the collision, based on a difference in the number of points between current point cloud data corresponding to the robot and previous point cloud data corresponding to the robot.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Vu to incorporate the teachings of Ikeguchi, wherein the regions corresponding to the robot and nearby objects are decimated into cubes and then checked for points of objects inside the robot region (overlap) to determine interference. The motivation to incorporate the teachings of Ikeguchi would be to enhance the accuracy of interference determination (see par. 0070).
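For illustration only, the following is a minimal sketch, not taken from either reference, of the kind of voxel-overlap determination described in the Ikeguchi passages cited above (pars. 0063 and 0075): both classified point clouds are demarcated into cubes, and a collision is predicted if any cube is occupied by both. The function names, the use of NumPy, and the 0.1 mm cube size are assumptions made solely for this sketch.

```python
import numpy as np

def voxelize(points, voxel_size=0.0001):
    # Map each 3D point to the integer index of the cube (voxel) that contains it.
    # Keeping only the set of occupied cube indices has the same effect as the
    # decimation quoted from Ikeguchi par. 0063 (one representative point per cube).
    idx = np.floor(np.asarray(points) / voxel_size).astype(np.int64)
    return set(map(tuple, idx))

def predict_collision_by_overlap(robot_points, object_points, voxel_size=0.0001):
    # First prediction method (sketch): voxelize both classified point clouds and
    # predict a collision if any cube is occupied by both the robot and the object.
    robot_voxels = voxelize(robot_points, voxel_size)
    object_voxels = voxelize(object_points, voxel_size)
    return len(robot_voxels & object_voxels) > 0

# Example with synthetic data (coordinates in meters):
robot_pts = np.random.rand(1000, 3) * 0.5     # points classified as the robot
object_pts = robot_pts[:10].copy()            # object points coinciding with robot points
print(predict_collision_by_overlap(robot_pts, object_pts))  # expected: True
```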
Regarding Claim 2, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, wherein
in the classification step, at least one of a first classification method, a second classification method, a third classification method, or a fourth classification method is executable,
the first classification method being a method of performing clustering on the point cloud data acquired in the second acquisition step (see at least " One simple way clustering can be used is to eliminate small groups of occupied or potentially occupied voxels that are too small to possibly contain a person. Such small clusters may arise from occupation and occlusion analysis, as described above, and can otherwise cause control system 112 to incorrectly identify a hazard. Clusters can be tracked over time by simply associating identified clusters in each image frame with nearby clusters in previous frames or using more sophisticated image-processing techniques. The shape, size, or other features of a cluster can be identified and tracked from one frame to the next. Such features can be used to confirm associations between clusters from frame to frame, or to identify the motion of a cluster. This information can be used to enhance or enable some of the classification techniques described below. Additionally, tracking clusters of points can be employed to identify incorrect and thus potentially hazardous situations." in par. 0044) and
classifying the point cloud data into the point cloud data corresponding to the robot and the point cloud data corresponding to the nearby object, based on a result of the clustering and a position of the robot, the position of the robot being identified by the configuration information and the posture information acquired in the first acquisition step (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states. In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on, and (3) other foreign objects, including people, that may be moving in unpredictable ways and that can be harmed by the machinery.” In par. 0045 and "For example, these data may be provided by the robot 402 or the robot controller 407 via a safety-rated communication protocol providing access to safety-rated data. The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092) ;
the second classification method being a method of acquiring first point cloud data corresponding to the robot from the point cloud data acquired in the second acquisition step by using the first classification method (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states. In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on, and (3) other foreign objects, including people, that may be moving in unpredictable ways and that can be harmed by the machinery.” In par. 0045 and "For example, these data may be provided by the robot 402 or the robot controller 407 via a safety-rated communication protocol providing access to safety-rated data. The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092),
comparing the first point cloud data with second point cloud data stored in advance (see at least "In some cases, the robot may provide an interface to obtain joint positions that are not safety-rated, in which case the joint positions can be verified against images from sensors 102 (using, for example, safety-rated software). For example, received joint positions may be combined with static 3D models of each link and any end effectors to generate a 3D model of the entire robot 402. This 3D image can be used to remove any objects in the sensing data that are part of the robot itself If the joint positions are correct, this will fully eliminate all object data attributed to the robot 402. If, however, the joint positions are incorrect, the true position of robot 402 will diverge from the model, and some parts of the detected robot will not be removed. Those points will then appear as a foreign object in the new cycle. In the previous cycle, it can be assumed that the joint positions were correct because otherwise robot 402 would have been halted. Since the base joint of the robot does not move, at least one of the divergent points must be close to the robot. The detection of an unexpected entity close to robot 402 can then be used to trigger an error condition, which will cause control system 112 (see FIG. 1) to transition robot 402 to a safe state. Alternatively, sensor data can be used to identify the position of the robot using a correlation algorithm, such as described above in the section on registration, and this detected position can be compared with the joint position reported by the robot. " in par. 0093) ,
removing, from the first point cloud data, a point cloud to obtain point cloud data, and classifying the obtained point cloud data as the point cloud data corresponding to the robot (see at least "This 3D image can be used to remove any objects in the sensing data that are part of the robot itself If the joint positions are correct, this will fully eliminate all object data attributed to the robot 402." in par. 0093) ,
the third classification method being a method of predicting a position of a tip portion of the robot in the point cloud data acquired in the second acquisition step, based on the configuration information and the posture information, and classifying, as the point cloud data corresponding to the robot, a point cloud corresponding to the tip portion of the robot among point clouds overlapping with a three-dimensional model of the robot when the three-dimensional model of the robot is arranged at the predicted position (see at least "The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402." in par. 0092 and “Alternatively, sensor data can be used to identify the position of the robot using a correlation algorithm, such as described above in the section on registration, and this detected position can be compared with the joint position reported by the robot. If the joint position information provided by robot 402 has been validated in this manner, it can be used to validate joint velocity information, which can then be used to predict future joint positions. If these positions are inconsistent with previously validated actual joint positions, the program can similarly trigger an error condition.” In par. 0093) ,
the fourth classification method being a method of acquiring first point cloud data corresponding to the robot from the point cloud data acquired in the second acquisition step by using the first classification method, comparing the first point cloud data with past point cloud data corresponding to the robot and past point cloud data corresponding to the nearby object, and classifying the first point cloud data as closer point cloud data of the past point cloud data corresponding to the robot and the past point cloud data corresponding to the nearby object (see at least "In some cases, the robot may provide an interface to obtain joint positions that are not safety-rated, in which case the joint positions can be verified against images from sensors 102 (using, for example, safety-rated software). For example, received joint positions may be combined with static 3D models of each link and any end effectors to generate a 3D model of the entire robot 402. This 3D image can be used to remove any objects in the sensing data that are part of the robot itself If the joint positions are correct, this will fully eliminate all object data attributed to the robot 402. If, however, the joint positions are incorrect, the true position of robot 402 will diverge from the model, and some parts of the detected robot will not be removed. Those points will then appear as a foreign object in the new cycle. In the previous cycle, it can be assumed that the joint positions were correct because otherwise robot 402 would have been halted. Since the base joint of the robot does not move, at least one of the divergent points must be close to the robot. The detection of an unexpected entity close to robot 402 can then be used to trigger an error condition, which will cause control system 112 (see FIG. 1) to transition robot 402 to a safe state. Alternatively, sensor data can be used to identify the position of the robot using a correlation algorithm, such as described above in the section on registration, and this detected position can be compared with the joint position reported by the robot. If the joint position information provided by robot 402 has been validated in this manner, it can be used to validate joint velocity information, which can then be used to predict future joint positions. If these positions are inconsistent with previously validated actual joint positions, the program can similarly trigger an error condition." in par. 0093) .
Vu does not appear to explicitly teach all of the following, but Ikeguchi does teach:
removing, from the first point cloud data, a point cloud for which a shortest distance from the second point cloud data is equal to or less than a predetermined threshold to obtain point cloud data, and classifying the obtained point cloud data as the point cloud data corresponding to the robot (see at least “The robot region corresponding to the 3D model of the manipulator 3 may be set based on the information about the shape and size of the manipulator 3 corresponding to the type of the manipulator 3, the information about the orientation of the manipulator 3, and the information about the shape and size of the tool corresponding to the type of the tool attached to the manipulator 3, such as a welding torch. The type of the manipulator 3, and the orientation information of the manipulator 3 (e.g., angle information of each axis) are preferably obtained from the robot controller 2.” In par. 0069 and "The determination region that encompasses the robot region should preferably be set in a range that can absorb possible errors in the point cloud data obtained by the distance measurement unit 15, to a size with some allowance to accommodate the manipulator 3 with a welding torch mounted thereon. This way, the influence of possible errors in the point cloud data can be removed from the range before setting the determination region, which helps enhance the accuracy of determination for interference. The determination region may be provided for each link of the manipulator 3. " in par. 0070)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Vu to incorporate the teachings of Ikeguchi, wherein the point cloud corresponding to the robot is extracted by matching a stored model of the robot plus a predetermined range that accounts for errors in the distance measurement data. The motivation to incorporate the teachings of Ikeguchi would be to enhance the accuracy of interference determination (see par. 0070).
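For illustration only, the following sketch shows one way the distance-threshold removal recited in the second classification method (and in claim 12) could be carried out; it is not asserted to be the implementation of either reference. The use of SciPy's k-d tree, the function name, and the 5 mm threshold value are assumptions for this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_points_near_reference(first_points, second_points, threshold=0.005):
    # For each point of the first point cloud, find the shortest distance to the
    # pre-stored second point cloud; drop the point if that distance is at or
    # below the threshold, and keep the remainder as the point cloud
    # corresponding to the robot (per the second classification method).
    first_points = np.asarray(first_points)
    tree = cKDTree(np.asarray(second_points))
    dists, _ = tree.query(first_points, k=1)
    return first_points[dists > threshold]
```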
Regarding Claim 3, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 2,
Vu further teaches: further comprising a determination step of determining whether the nearby object has moved (see at least " The set of relevant entities can include all entities in the workspace, including both static background such as walls and tables, and moving entities such as workpieces and human workers. Either from prior configuration or run-time detection, sensors 102 and analysis module 342 may be able to infer which entities could possibly be moving. In this case, any of the algorithms described above can be refined to leave additional margins to account for entities that might be moving, but to eliminate those margins for entities that are known to be static, so as not to reduce throughput unnecessarily but still automatically eliminate the possibility of collisions with static parts of the work cell." in par. 0100) , wherein
in the classification step, the first classification method is used when it is determined that the nearby object has moved (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states. In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on, and (3) other foreign objects, including people, that may be moving in unpredictable ways and that can be harmed by the machinery.” In par. 0045 and "For example, these data may be provided by the robot 402 or the robot controller 407 via a safety-rated communication protocol providing access to safety-rated data. The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092) , and
the second classification method is used when it is determined that the nearby object has not moved (see at least " This 3D image can be used to remove any objects in the sensing data that are part of the robot itself If the joint positions are correct, this will fully eliminate all object data attributed to the robot 402. If, however, the joint positions are incorrect, the true position of robot 402 will diverge from the model, and some parts of the detected robot will not be removed. Those points will then appear as a foreign object in the new cycle. In the previous cycle, it can be assumed that the joint positions were correct because otherwise robot 402 would have been halted. Since the base joint of the robot does not move, at least one of the divergent points must be close to the robot. The detection of an unexpected entity close to robot 402 can then be used to trigger an error condition, which will cause control system 112 (see FIG. 1) to transition robot 402 to a safe state." in par. 0093) .
Regarding Claim 5, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, further comprising a preprocessing step of performing predetermined preprocessing on the point cloud data acquired in the second acquisition step, wherein
the predetermined preprocessing includes a process of removing point cloud data located out of a movable range of the robot (see at least “in particular, at each cycle, object analysis module 415 identifies the precise 3D location and extent of all entities in workspace 400 that are either within the robot's reach or that could move into the robot's reach at conservative expected velocities….That is, a workspace may have a plurality of entry points, not all of which are necessarily monitored; an operator may, for example, enter the workspace 400 via an unguarded entry point (which may also be unmonitored). When the sensors 102 detect the presence of an object (which could be the entering operator), object-analysis module 415 assumes the object is a human and computes a trajectory and based thereon, a minimum estimated time before the entering operator could come within an unsafe distance of robot 402.” In par. 0062 " The separation of tracking and safeguarding also allows for the tracking of objects and humans outside the safeguarded space, which we define as the volume that is being actively monitored for objects and humans. In general, the safeguarded space corresponds to the space occupied by the workcell. This eliminates the need for tracking entry points for objects smaller than a human, as the tracking may be performed outside the safeguarded volume, knowing where the humans are outside the safeguarded volume. Once a human enters the safeguarded volume, it is known to be a human as it has been tracked as such by the workcell's vision system or one associated with an adjacent workcell and in communication with the workcell's vision system" in par. 0091) , and
the classification step classifies the point cloud data on which the predetermined preprocessing is performed in the preprocessing step (see at least "The separation of tracking and safeguarding also allows for the tracking of objects and humans outside the safeguarded space, which we define as the volume that is being actively monitored for objects and humans. In general, the safeguarded space corresponds to the space occupied by the workcell. This eliminates the need for tracking entry points for objects smaller than a human, as the tracking may be performed outside the safeguarded volume, knowing where the humans are outside the safeguarded volume. Once a human enters the safeguarded volume, it is known to be a human as it has been tracked as such by the workcell's vision system or one associated with an adjacent workcell and in communication with the workcell's vision system" in par. 0091 ) .
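For illustration only, a minimal sketch of the claim 5 preprocessing step of removing point cloud data located out of the robot's movable range is given below. Modeling the movable range as a sphere centered on the robot base, and the function and parameter names, are simplifying assumptions for this sketch; Vu (par. 0062) describes the robot's reach more generally.

```python
import numpy as np

def filter_to_movable_range(points, base_position, reach_radius):
    # Discard points located outside the robot's movable range before
    # classification; a sphere of radius reach_radius centered on the robot
    # base stands in for the movable range in this sketch.
    points = np.asarray(points)
    d = np.linalg.norm(points - np.asarray(base_position, dtype=float), axis=1)
    return points[d <= reach_radius]
```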
Regarding Claim 6, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, further comprising
an installation step of installing a simulation point cloud for point cloud data classified as the point cloud data corresponding to the robot in the classification step (see at least “Reliable state analysis typically requires an accurate model of each robot link. This model can be obtained a priori, e.g., from 3D CAD files provided by the robot manufacturer or generated by industrial engineers for a specific project.” In par. 0095 and " In this case, it is possible for RSDM 420 to create the model itself, e.g., using sensors 102. This may be done in a separate training mode where robot 402 runs through a set of motions, e.g., the motions that are intended for use in the given application and/or a set of motions designed to provide sensors 102 with appropriate views of each link and end effector. It is possible, but not necessary, to provide some basic information about the robot a priori, such as the lengths and rotational axes of each link. During this training mode, RSDM 420 generates a 3D model of each link, complete with all necessary attachments. This model can then be used by RSDM 420 in conjunction with sensor images to determine the robot state." in par. 0096) ,
wherein the prediction step predicts a collision between the robot and the nearby object by using point cloud data including the simulation point cloud installed in the installation step (see at least " A refinement of this technique is for SADM 425 to control maximum velocity proportionally to the square root of the minimum distance, which reflects the fact that in a constant-deceleration scenario, velocity changes proportionally to the square root of the distance traveled, resulting in a smoother and more efficient, but still equally safe, result. A further refinement is for SADM 425 to modulate maximum velocity proportionally to the minimum possible time to collision—that is, to project the robot's current state forward in time, project the intrusions toward the robot trajectory, and identify the nearest potential collision or violation of an object-specified protective separation distance. This refinement has the advantage that the robot will move more quickly away from an obstacle than toward it, which maximizes throughput while still correctly preserving safety. Since the robot's future trajectory depends not just on its current velocity but on subsequent commands, SADM 425 may consider all points reachable by robot 402 within a certain reaction time given its current joint positions and velocities, and cause control signals to be issued based on the minimum collision time among any of these states. Yet a further refinement is for SADM 425 to take into account the entire planned trajectory of the robot when making this calculation, rather than simply the instantaneous joint velocities." in par. 0099) , and
the simulation point cloud is installed in a straight line connecting a position of a tip of the robot and a position of a joint of the robot (see at least " The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402." in par. 0092) .
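For illustration only, the following sketch shows how simulation points could be installed on the straight line connecting the tip position and a joint position of the robot, as recited in claim 6. Linear interpolation with an arbitrary point count is an assumption for this sketch and is not attributed to either reference.

```python
import numpy as np

def simulation_points_on_segment(tip_position, joint_position, n_points=50):
    # Generate simulation points on the straight line connecting a joint
    # position and the tip position of the robot, so that sparse or occluded
    # parts of the robot point cloud are filled in before collision prediction.
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    tip = np.asarray(tip_position, dtype=float)
    joint = np.asarray(joint_position, dtype=float)
    return (1.0 - t) * joint + t * tip
```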
Regarding Claim 7, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, further comprising:
a step of acquiring information on a movement direction of a predetermined point on the robot (see at least " Since the robot's future trajectory depends not just on its current velocity but on subsequent commands, SADM 425 may consider all points reachable by robot 402 within a certain reaction time given its current joint positions and velocities, and cause control signals to be issued based on the minimum collision time among any of these states. Yet a further refinement is for SADM 425 to take into account the entire planned trajectory of the robot when making this calculation, rather than simply the instantaneous joint velocities. " in par. 0099 ) ; and
an extraction step of setting a predefined range for the point cloud data corresponding to the robot, the predefined range being defined in advance to extend along a movement direction of the robot with respect to the predetermined point, and extracting point cloud data corresponding to the nearby object within the predefined range wherein the prediction step predicts a collision between the robot and the nearby object by using the point cloud data corresponding to the nearby object extracted in the extraction step and the point cloud data corresponding to the robot. (see at least " A further refinement is for SADM 425 to modulate maximum velocity proportionally to the minimum possible time to collision—that is, to project the robot's current state forward in time, project the intrusions toward the robot trajectory, and identify the nearest potential collision or violation of an object-specified protective separation distance. This refinement has the advantage that the robot will move more quickly away from an obstacle than toward it, which maximizes throughput while still correctly preserving safety. Since the robot's future trajectory depends not just on its current velocity but on subsequent commands, SADM 425 may consider all points reachable by robot 402 within a certain reaction time given its current joint positions and velocities, and cause control signals to be issued based on the minimum collision time among any of these states." in par. 0099)
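For illustration only, the following sketch shows one way the claim 7 extraction step could be realized: nearby-object points are kept only if they fall within a predefined range extending from a predetermined point on the robot along its movement direction. The cylindrical shape of the range and the 0.3 m / 0.1 m dimensions are assumptions made solely for this sketch.

```python
import numpy as np

def extract_objects_along_motion(object_points, point_on_robot, move_direction,
                                 length=0.3, radius=0.1):
    # Keep only nearby-object points inside a range extending from a
    # predetermined point on the robot along its movement direction; here the
    # range is a cylinder of the given length and radius (an assumption).
    object_points = np.asarray(object_points)
    d = np.asarray(move_direction, dtype=float)
    d /= np.linalg.norm(d)
    rel = object_points - np.asarray(point_on_robot, dtype=float)
    along = rel @ d                                   # distance along the movement direction
    across = np.linalg.norm(rel - np.outer(along, d), axis=1)
    mask = (along >= 0.0) & (along <= length) & (across <= radius)
    return object_points[mask]
```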
Regarding Claim 8, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, wherein
the posture information includes at least one of a position of a tip portion of the robot, an angle of the tip portion of the robot, a position of a joint portion of the robot, or an angle of the joint portion of the robot (see at least " Similarly, the instantaneous joint positions associated with the pose and the velocity of the end effector can be used to predict a future end-effector state. The state of an end effector may also include the size and/or weight of a workpiece, which can affect kinematics and maintenance of a safe distance from inanimate entities and humans." in par. 0061 and “The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092) .
Regarding Claim 9, Vu as modified by Ikeguchi (references to Vu) teaches:
the collision prediction method according to claim 1, wherein
the configuration information includes a three-dimensional model indicating a shape of the robot (see at least " For example, received joint positions may be combined with static 3D models of each link and any end effectors to generate a 3D model of the entire robot 402. " in par. 0093) .
Regarding Claim 10, Vu as modified by Ikeguchi (references to Vu) also teaches:
a collision prediction device for predicting a collision between a multi-joint robot and a nearby object (see at least " FIG. 3 illustrates, in greater detail, a representative embodiment of control system 112, which may be implemented on a general-purpose computer. The control system 112 includes a central processing unit (CPU) 305, system memory 310, and one or more non-volatile mass storage devices (such as one or more hard disks and/or optical storage units) 312. " in par. 0024), the collision prediction device comprising:
processing circuitry for implementing the method of Claim 1 (see the Claim 1 analysis above for the rejection of the method).
Regarding Claim 11, Vu as modified by Ikeguchi (references to Vu) also teaches:
the collision prediction device according to claim 10 (see the Claim 10 analysis above).
Vu does not appear to explicitly teach all of the following, but Ikeguchi does teach:
A welding system (see at least "The welding robot system 100 includes the interference determination device 1, a robot controller 2, and a manipulator (robot) 3, for example. " in par. 0035) comprising:
a welding robot, the welding robot comprising the robot (see at least "The manipulator 3 is a welding robot, which performs arc welding to a workpiece that is the welding target in accordance with the operation conditions set in the robot controller 2. " in par. 0037) .
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Vu to incorporate the teachings of Ikeguchi, wherein the system is a welding robot system with an interference determination device, such that the monitoring system taught by Vu is applied to a welding robot. The motivation to incorporate the teachings of Ikeguchi would be to more efficiently detect collisions between a welding robot and the surrounding environment (see par. 0050).
Regarding Claim 12, Vu as modified by Ikeguchi (references to Vu) also teaches:
the collision prediction method according to claim 1,
wherein the classification step includes acquiring first point cloud data corresponding to the robot from the point cloud data acquired in the second acquisition step by using the first classification method, (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states. In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on,” in par. 0045)
comparing the first point cloud data with second point cloud data stored in advance, removing, from the first point cloud data, a point cloud for which a shortest distance from the second point cloud data is equal to or less than a predetermined threshold to obtain point cloud data, and classifying the obtained point cloud data as the point cloud data corresponding to the robot. (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states.” In par. 0045 and “An offsetting tool may be used to adjust the boundary of a represented entity by a configurable number of voxels. This boundary offsetting step may be performed to account, for example, for multiple variants, part-to-part tolerance stack up or variance, or to influence the sensitivity of object analysis module 415 to the sensed entity or entities. Alternatively, conventional techniques of image data augmentation can be employed to translate and rotate by small amounts the entity images captured by the different sensors, thereby establishing the range of acceptable variations in position and orientation.” In par. 0065)
Regarding Claim 13, Vu as modified by Ikeguchi (references to Vu) also teaches:
the collision prediction method according to claim 1,
wherein the classification step includes predicting a position of a tip portion of the robot in the point cloud data acquired in the second acquisition step, based on the configuration information and the posture information, and classifying, as the point cloud data corresponding to the robot, a point cloud corresponding to the tip portion of the robot among point clouds overlapping with a three-dimensional model of the robot when the three-dimensional model of the robot is arranged at the predicted position (see at least “In some cases, it may be sufficient filter out clusters below a certain size and to identify cluster transitions that indicate error states. In other cases, however, it may be necessary to further classify objects into one or more of four categories: (1) elements of the machinery being controlled by system 112, (2) the workpiece or workpieces that the machinery is operating on, and (3) other foreign objects, including people, that may be moving in unpredictable ways and that can be harmed by the machinery.” In par. 0045 and “In cases where the machinery changes shape, elements of the machinery can be identified and classified, e.g., by supplying analysis module 342 with information about these elements (e.g., as scalable 3D representations), and in some cases (such as industrial robot arms) providing a source of instantaneous information about the state of the machinery. Analysis module 342 may be “trained” by operating machinery, conveyors, etc. in isolation under observation by the sensors 102, allowing analysis module 342 to learn their precise regions of operation resulting from execution of the full repertoire of motions and poses. Analysis module 342 may classify the resulting spatial regions as occupied.” In par. 0046 and "For example, these data may be provided by the robot 402 or the robot controller 407 via a safety-rated communication protocol providing access to safety-rated data. The 3D pose of the robot may then be determined by combining provided joint positions with a static 3D model of each link and any end effectors to obtain the 3D shape of the entire robot 402.” In par. 0092).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571)272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DYLAN M KATZ/Examiner, Art Unit 3657