DETAILED ACTION
This is a Non-Final Office Action on the merits in response to communications filed by Applicant on December 4th, 2025. Claims 1-21 are currently pending and examined below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments to the claims filed on November 4th, 2025 have been entered. Claims 1-3, 5, 9-10, 12-17, and 19-21 are currently amended and pending; claim 4 is as previously presented; and claims 6-8, 11, and 18 are original, unamended, and pending.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 1 - image pickup unit
Claim 14 - image pickup unit
Claim 16 - image pickup unit
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0029232 A1 (“Ouchi”) in view of JP 2013180380 A (“Hashimoto”) in further view of JP 2012011498 A (“Matsuzaki”).
Regarding claim 1, Ouchi teaches a robot apparatus comprising (Ouchi: Figure 1 robot system 1, ¶ 0044, “The robot system 1 is provided with a robot 20 and a control apparatus 30. In addition, the control apparatus 30 is provided with a robot control device 40 and an information processing device 50.”):
a robot (Ouchi: Figure 1 robot 20, ¶ 0044, “The robot system 1 is provided with a robot 20 and a control apparatus 30. In addition, the control apparatus 30 is provided with a robot control device 40 and an information processing device 50.”);
and a controller configured to control the robot (Ouchi: Figure 1 control apparatus 30, ¶ 0044, “The robot system 1 is provided with a robot 20 and a control apparatus 30. In addition, the control apparatus 30 is provided with a robot control device 40 and an information processing device 50.”),
and wherein the controller controls the robot (Ouchi: ¶ 0052, “The force detection information is used in force control, which is control based on force detection information, out of types of control of the robot 20 carried out by the robot control device 40. The force control is control to operate at least one of the end effector E and the manipulator M such that the external force indicated by the force detection information realizes a state where a predetermined termination condition is satisfied.”, ¶ 0093, “In an example illustrated in FIG. 4, the task category information C1 is information indicating a task category into which tasks that include operation of the robot 20 pressing an object gripped by the robot 20 to another object through force control is classified.”, ¶ 0099, “In the example illustrated in FIG. 4, the task category information C2 is information indicating a task category into which tasks that include operation of the robot 20 inserting an object gripped by the robot 20 into another object through force control is classified.”, ¶ 0105, “In an example illustrated in FIG. 4, the task category information C3 is information indicating a task category that includes operation of the robot 20 tightening a lid (cap), which is gripped by the robot 20, to a main body of a PET bottle, to which the lid is tightened, through force control.”, ¶ 0166, “Herein, the parameter is an impedance parameter since the force control is impedance control in this example. That is, the parameter is each of a virtual inertia coefficient, a virtual viscosity coefficient, and a virtual elasticity coefficient.”. The cited passages clearly show that the robot is configured to perform force control using an impedance parameter in order to perform various tasks. One of ordinary skill in the art would clearly see that an impedance parameter is a type of dynamical characteristic of the system.)
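Examiner's note (illustration only; not a formula recited by Ouchi): force control with virtual inertia, viscosity, and elasticity coefficients of the kind cited above is conventionally expressed by an impedance relation of the form $M\,\ddot{e}(t) + D\,\dot{e}(t) + K\,e(t) = F_{\mathrm{ext}}(t)$, where $M$, $D$, and $K$ are the virtual inertia, viscosity, and elasticity coefficients, $e(t)$ is the deviation of the control point from its reference trajectory, and $F_{\mathrm{ext}}(t)$ is the external force indicated by the force detection information.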
Ouchi does not teach an image pickup unit;
wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
wherein the controller obtains, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured,
and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Hashimoto, in the same field of endeavor, teaches an image pickup unit (Hashimoto: Figure 1 imaging device 30, ¶ 0021, “The robot 20 includes a support base 20a fixed to the ground, a manipulator part (manipulator) 20b connected to the support base 20a so as to be rotatable and rotatable, a gripping part 20c connected to the manipulator part 20b, a force sensor 20d, and an imaging device 30.”);
wherein the controller obtains a goal image in which a virtual attractive force is set in a feature portion (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the force is determined based on the positional relationship between the object in the captured image and the assembled component in the goal image, together with the load. Additionally, impedance control is used to control the robot, and said impedance control uses the positional relationship and the determined load. The impedance control therefore acts as a virtual attractive force.),
wherein the controller obtains, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages clearly teach that a current image containing a feature portion corresponding to the feature portion in the goal image is captured.),
and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages clearly show that the robot is controlled such that the virtual attractive force (i.e., the impedance control) causes the feature portion in the current image to approach the feature portion in the goal image.).
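Examiner's note (illustration only; hypothetical names, not code from Hashimoto): a minimal sketch of treating the image-space difference between corresponding feature points in the goal image and the current image as a virtual attractive force is:

import numpy as np

def virtual_attractive_force(goal_features, current_features, k_att=0.5):
    # Per-feature pixel error between corresponding points in the goal
    # image and the current image; the proportional term acts as a
    # virtual spring pulling the current features toward the goal.
    error = np.asarray(goal_features, dtype=float) - np.asarray(current_features, dtype=float)
    return k_att * error

# Example: two corresponding feature points (pixel coordinates).
force = virtual_attractive_force([[120, 80], [200, 150]], [[110, 85], [190, 160]])

Driving the robot along this force vector reduces the difference between the two images, which is the visual-servoing behavior described in ¶ 0077 of Hashimoto.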
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi with the method of determining force from the difference between two images taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of control allows the robot to continue to operate even if it comes into contact with an obstacle and prevents damage to the object the robot is gripping (Hashimoto: ¶ 0082, “Thereby, even if the component (object) 200 contacts the obstacle, the control device 10 generates a control signal so that the component (object) 200 follows the obstacle. Therefore, the object follows the obstacle. Thus, the robot 20 can be controlled. Thereby, the robot 20 can assemble the component (object) 200 by moving the component (object) 200 to the final target position without destroying the component (object) 200.”).
Ouchi in view of Hashimoto does not teach wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Matsuzaki, in the same field of endeavor, teaches wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”. The cited passages clearly teach that the system sets a repulsive force at a feature point.),
wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”, ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”. The cited passages clearly show that a repulsive force is set such that the robot avoids the feature at which a repulsive force is set.).
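Examiner's note (illustration only; hypothetical names, not code from Matsuzaki): a minimal sketch of a repulsive force vector directed along the outward normal of the obstacle surface, with magnitude increasing as a representative point approaches the obstacle, is:

import numpy as np

def virtual_repulsive_force(point, nearest_obstacle_point, d0=0.3, k_rep=1.0):
    # Vector from the nearest obstacle point to the representative point;
    # its unit vector is the away-from-obstacle direction (surface normal).
    diff = np.asarray(point, dtype=float) - np.asarray(nearest_obstacle_point, dtype=float)
    d = np.linalg.norm(diff)
    if d >= d0:
        # No repulsion beyond the influence distance d0.
        return np.zeros_like(diff)
    # Classic potential-field form: magnitude grows as d approaches zero.
    magnitude = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
    return magnitude * (diff / d)

Summing this repulsive contribution with the attractive contribution at each control step yields motion that approaches the goal while avoiding the obstacle, consistent with ¶¶ 0023, 0028, and 0030 of Matsuzaki.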
Ouchi in view of Hashimoto teaches a robot apparatus comprising: a robot; an image pickup unit; and a controller configured to control the robot; wherein the controller obtains a goal image in which a virtual attractive force is set in a feature portion, wherein the controller obtains, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set. Ouchi in view of Hashimoto does not teach wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set. Matsuzaki teaches wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set. A person of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Ouchi in view of Hashimoto with wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki. Furthermore, the system taught in Ouchi in view of Hashimoto is already configured to set a virtual attractive force in the goal image and control the robot such that the feature point in the current image approaches the feature point in the goal image. As such, one of ordinary skill in the art would have been able to add the repulsive force at a feature point, and the avoidance of said feature point based on the repulsive force, to the control of the robot as taught in Matsuzaki. Even though Matsuzaki implements the feature portions of the obstacle and the control object using 3D models at predetermined locations of the robot, using the method with images taken from an imaging device would not change the functionality of the method or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a robot apparatus comprising: wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi in view of Hashimoto with wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
Regarding claim 2, Ouchi in view of Hashimoto teaches wherein the goal image is an image obtained by imaging the robot that is in a predetermined posture (Hashimoto: ¶ 0031, “Here, the goal image is an image when the object is moved to the target position, and in the present embodiment, as an example, the goal image is an image in a state where the component 200 is assembled to the assembly component 210. Specifically, for example, the goal image generation unit 14 acquires and holds in advance camera images after assembling the component 200 in all states of the position and orientation that the assembled component 210 can take”),
and wherein the controller makes a posture of the robot closer to the predetermined posture by the control (Hashimoto: ¶ 0039, “The determination unit 19 determines whether or not the component 20 gripped by the robot 20 has reached the target position. Specifically, for example, the determination unit 19 determines whether or not the calculated similarity is greater than a predetermined threshold. The determination unit 19 determines that the current position and orientation of the component 200 has reached the target position and orientation when the calculated similarity is greater than a predetermined threshold.”. See ¶ 0043 and 0045 for the description of the force control.).
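Examiner's note (illustration only; Hashimoto's cited passage does not specify the similarity measure, so normalized cross-correlation is assumed here): a minimal sketch of the threshold test described in ¶ 0039 is:

import numpy as np

def image_similarity(current_img, goal_img):
    # Normalized cross-correlation between the current image and the
    # goal image (numpy arrays of equal shape), in the range [-1, 1].
    a = current_img.astype(float).ravel()
    a -= a.mean()
    b = goal_img.astype(float).ravel()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reached_target(current_img, goal_img, threshold=0.95):
    # The target posture is deemed reached when the similarity exceeds
    # the predetermined threshold.
    return image_similarity(current_img, goal_img) > threshold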
Regarding claim 3, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the feature portion in the goal image includes a first feature portion corresponding to a control target object (Matsuzaki: ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20.”),
and a second feature portion corresponding to an obstacle (Matsuzaki: ¶ 0023, “The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface,”. As can be seen from the cited paragraph, multiple points on the surface of the obstacle are created.),
and wherein the controller obtains a third feature portion corresponding to the control target object from the current image as the feature portion in the current image (Matsuzaki: ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20.”),
and obtains a virtual force direction acting between the third feature portion and the first feature portion and a virtual force direction acting between the third feature portion and the second feature portion (Matsuzaki: ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”. The cited paragraph shows that information about the force between the feature points is obtained.).
Regarding claim 4, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the control target object is the robot or an object held by the robot (Ouchi: ¶ 0054, “The position correlated in advance with the end effector E is, for example, the position of the centroid of a target object O1 (not illustrated) gripped by the end effector E. The control point T is, for example, a tool center point (TCP). Instead of the TCP, the control point T may be other virtual points including a virtual point correlated with a part of the arm A.”).
Regarding claim 5, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the virtual attractive force is set to the first feature portion in the goal image (Matsuzaki: ¶ 0061, “A direction attractive force vector 61 is applied to the hand 70 of the robot arm 2. At this time, the magnitude of the attractive force vector 61 is calculated and set from a component proportional to the positional deviation amount, for example.”, ¶ 0063, “Further, an attractive force directed to the target point 60 acts on the hand 70 of the robot arm 2 so that the operator is guided to the target position 60 without interfering with the robot arm 2 to the target position.”),
and the virtual repulsive force is set to the second feature portion in the goal image (Matsuzaki: ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”).
Regarding claim 6, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the controller sets the virtual attractive force by using a first parameter (Matsuzaki: ¶ 0061, “At this time, the magnitude of the attractive force vector 61 is calculated and set from a component proportional to the positional deviation”. The cited passage shows that the positional deviation is used to set the attractive force.),
and sets the virtual repulsive force by using a second parameter (Matsuzaki: ¶ 0023, “The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle.”. The cited passage shows that the distance between the robot and the obstacle is used to set the repulsive force.).
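Examiner's note (illustration only; a form consistent with, but not recited in, the cited passages): the two parameters can be expressed as $F_{\mathrm{att}} = k_{1}\,(p_{\mathrm{goal}} - p_{\mathrm{cur}})$ and $\lVert F_{\mathrm{rep}}(d) \rVert \propto k_{2}/d$ for $d < d_{0}$, where the first parameter $k_{1}$ scales an attractive force proportional to the positional deviation, and the second parameter $k_{2}$ scales a repulsive magnitude that increases as the distance $d$ to the obstacle decreases, with no repulsion beyond an influence distance $d_{0}$.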
Regarding claim 7, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the controller displays a first user interface image for receiving setting of the first parameter and the second parameter on a display portion (Ouchi: Figure 9 operational screen P3, ¶ 0169, “In the task indicated by the task information selected by the user on the operational screen P2 illustrated in FIG. 5, the region JG5 is a region to display a plurality of input fields, into which each of parameters according to the task (in this example, parameters of force control) is input by the user.”).
Regarding claim 14, Ouchi teaches a method for controlling a robot apparatus, the method comprising:
controlling the robot (Ouchi: ¶ 0052, “The force detection information is used in force control, which is control based on force detection information, out of types of control of the robot 20 carried out by the robot control device 40. The force control is control to operate at least one of the end effector E and the manipulator M such that the external force indicated by the force detection information realizes a state where a predetermined termination condition is satisfied.”, ¶ 0093, “In an example illustrated in FIG. 4, the task category information C1 is information indicating a task category into which tasks that include operation of the robot 20 pressing an object gripped by the robot 20 to another object through force control is classified.”, ¶ 0099, “In the example illustrated in FIG. 4, the task category information C2 is information indicating a task category into which tasks that include operation of the robot 20 inserting an object gripped by the robot 20 into another object through force control is classified.”, ¶ 0105, “In an example illustrated in FIG. 4, the task category information C3 is information indicating a task category that includes operation of the robot 20 tightening a lid (cap), which is gripped by the robot 20, to a main body of a PET bottle, to which the lid is tightened, through force control.”, ¶ 0166, “Herein, the parameter is an impedance parameter since the force control is impedance control in this example. That is, the parameter is each of a virtual inertia coefficient, a virtual viscosity coefficient, and a virtual elasticity coefficient.”. The cited passages clearly show that the robot is configured to perform force control using an impedance parameter in order to perform various tasks. One of ordinary skill in the art would clearly see that an impedance parameter is a type of dynamical characteristic of the system.).
Ouchi does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured,
controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Hashimoto, in the same field of endeavor, teaches obtaining a goal image in which a virtual attractive force is set in a feature portion (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the force is determined based on the positional relationship between the object in the captured image and the assembled component in the goal image, together with the load. Additionally, impedance control is used to control the robot, and said impedance control uses the positional relationship and the determined load. The impedance control therefore acts as a virtual attractive force.),
obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages clearly teach that a current image containing a feature portion corresponding to the feature portion in the goal image is captured.),
controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages clearly show that the robot is controlled such that the virtual attractive force (i.e., the impedance control) causes the feature portion in the current image to approach the feature portion in the goal image.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot control method taught in Ouchi with the method of determining force from the difference between two images taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of control allows the robot to continue to operate even if it comes into contact with an obstacle and prevents damage to the object the robot is gripping (Hashimoto: ¶ 0082, “Thereby, even if the component (object) 200 contacts the obstacle, the control device 10 generates a control signal so that the component (object) 200 follows the obstacle. Therefore, the object follows the obstacle. Thus, the robot 20 can be controlled. Thereby, the robot 20 can assemble the component (object) 200 by moving the component (object) 200 to the final target position without destroying the component (object) 200.”).
Ouchi in view of Hashimoto does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Matsuzaki, in the same field of endeavor, teaches obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”. The cited passages clearly teach that the system sets a repulsive force at a feature point.),
controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”, ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”. The cited passages clearly show that a repulsive force is set such that the robot avoids the feature at which a repulsive force is set.).
Ouchi in view of Hashimoto teaches a method for controlling a robot apparatus, the method comprising: obtaining a goal image in which a virtual attractive force is set in a feature portion, obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set. Ouchi in view of Hashimoto does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set. Matsuzaki teaches obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set. A person of ordinary skill in the art would have had the technological capabilities required to have modified the method taught in Ouchi in view of Hashimoto with obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki. Furthermore, the method taught in Ouchi in view of Hashimoto is already configured to set a virtual attractive force in the goal image and control the robot such that the feature point in the current image approaches the feature point in the goal image. As such, one of ordinary skill in the art would have been able to add the repulsive force at a feature point, and the avoidance of said feature point based on the repulsive force, to the control of the robot as taught in Matsuzaki. Even though Matsuzaki implements the feature portions of the obstacle and the control object using 3D models at predetermined locations of the robot, using the method with images taken from an imaging device would not change the functionality of the method or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a method for controlling a robot apparatus, the method comprising: obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method taught in Ouchi in view of Hashimoto with obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
Regarding claim 15, Ouchi teaches output information for controlling the robot (Ouchi: ¶ 0052, “The force detection information is used in force control, which is control based on force detection information, out of types of control of the robot 20 carried out by the robot control device 40. The force control is control to operate at least one of the end effector E and the manipulator M such that the external force indicated by the force detection information realizes a state where a predetermined termination condition is satisfied.”).
Ouchi does not teach an image processing apparatus configured to obtain information, the image processing apparatus comprising:
a controller configured to obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
obtain, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured,
output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Hashimoto, in the same field of endeavor, teaches an image processing apparatus configured to obtain information, the image processing apparatus comprising (Hashimoto: Figure 1 imaging device 30, ¶ 0021, “The robot 20 includes a support base 20a fixed to the ground, a manipulator part (manipulator) 20b connected to the support base 20a so as to be rotatable and rotatable, a gripping part 20c connected to the manipulator part 20b, a force sensor 20d, and an imaging device 30.”):
a controller configured to: obtain a goal image in which a virtual attractive force is set in a feature portion (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the force is determined based on the positional relationship between the object in the captured image and the assembled component in the goal image, together with the load. Additionally, impedance control is used to control the robot, and said impedance control uses the positional relationship and the determined load. The impedance control therefore acts as a virtual attractive force.),
obtain, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages teach that a current image containing a feature portion corresponding to the feature portion in the goal image is captured.),
output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the robot is controlled by the virtual attractive force (i.e., the impedance control) such that the feature portion in the current image approaches the feature portion in the goal image.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot control method taught in Ouchi with the method of determining force from the difference between two images taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of control allows the robot to continue to operate even if it comes into contact with an obstacle and prevents damage to the object the robot is gripping (Hashimoto: ¶ 0082, “Thereby, even if the component (object) 200 contacts the obstacle, the control device 10 generates a control signal so that the component (object) 200 follows the obstacle. Therefore, the object follows the obstacle. Thus, the robot 20 can be controlled. Thereby, the robot 20 can assemble the component (object) 200 by moving the component (object) 200 to the final target position without destroying the component (object) 200.”).
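By way of a minimal illustrative sketch only (this code appears in no cited reference and is not part of the rejection; all names and gains are hypothetical assumptions), impedance control driven by an image-feature error behaves like a virtual spring-damper, i.e., a virtual attractive force pulling the feature portion in the current image toward the feature portion in the goal image:

import numpy as np

def virtual_attractive_force(current_feature, goal_feature, velocity, k=1.0, b=0.5):
    # Spring term pulls the current feature toward the goal feature;
    # the damping term resists motion, as in impedance control.
    error = goal_feature - current_feature
    return k * error - b * velocity

# Example with feature coordinates given in pixels.
force = virtual_attractive_force(np.array([120.0, 80.0]), np.array([100.0, 60.0]), np.zeros(2))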
Ouchi in view of Hashimoto does not teach obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Matsuzaki, in the same field of endeavor, teaches obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”. The cited passages teach that the system sets a repulsive force at a feature point.),
output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”, ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”. The cited passages clearly show that a repulsive force is set such that the robot avoids the feature at which a repulsive force is set.).
Ouchi in view of Hashimoto teaches an image processing apparatus configured to obtain information, the image processing apparatus comprising: a controller configured to obtain a goal image in which a virtual attractive force is set in a feature portion, obtain, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured, and output information for controlling the robot when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set. Ouchi in view of Hashimoto does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, or outputting information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set. Matsuzaki teaches these limitations. A person of ordinary skill in the art would have had the technological capabilities required to modify the method taught in Ouchi in view of Hashimoto to obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and to output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki. Furthermore, the method taught in Ouchi in view of Hashimoto is already configured to set a virtual attractive force in the goal image and to control the robot such that the feature point in the current image approaches the feature point in the goal image. As such, one of ordinary skill in the art would have been able to add a repulsive force at a feature point, and to avoid said feature point based on the repulsive force in the control of the robot, as taught in Matsuzaki. Even though the feature portions of the obstacle and the control object in Matsuzaki are implemented using 3D models at predetermined locations of the robot, using the method with images taken from an imaging device would neither change the functionality of the method nor introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an image processing apparatus configured to obtain information, the image processing apparatus comprising a controller configured to obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and output information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method taught in Ouchi in view of Hashimoto with obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and outputting information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
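As a further minimal sketch only (hypothetical names and values; not drawn from Matsuzaki or any cited reference), a virtual repulsive force of the kind cited above can be combined with the virtual attractive force so that the controlled point is pulled toward the goal while being pushed away from a feature portion in which the repulsive force is set:

import numpy as np

def repulsive_force(point, obstacle, influence=2.0, eta=1.0):
    # Points away from the obstacle; the magnitude increases as the
    # obstacle is approached and is zero outside the influence range.
    offset = point - obstacle
    dist = np.linalg.norm(offset)
    if dist >= influence or dist == 0.0:
        return np.zeros_like(point)
    return eta * (1.0 / dist - 1.0 / influence) * (offset / dist)

def net_virtual_force(point, goal, obstacles, k=1.0):
    # Attractive pull toward the goal plus a repulsive push from each obstacle.
    force = k * (goal - point)
    for obstacle in obstacles:
        force = force + repulsive_force(point, obstacle)
    return force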
Regarding claim 16, Ouchi teaches outputting information for controlling the robot (Ouchi: ¶ 0052, “The force detection information is used in force control, which is control based on force detection information, out of types of control of the robot 20 carried out by the robot control device 40. The force control is control to operate at least one of the end effector E and the manipulator M such that the external force indicated by the force detection information realizes a state where a predetermined termination condition is satisfied.”).
Ouchi does not teach an image processing method for obtaining information about force, the image processing method comprising:
obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured,
outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set.
Hashimoto, in the same field of endeavor, teaches an image processing method for obtaining information about force, the image processing method comprising (Hashimoto: Figure 1 imaging device 30, ¶ 0021, “The robot 20 includes a support base 20a fixed to the ground, a manipulator part (manipulator) 20b connected to the support base 20a so as to be rotatable and rotatable, a gripping part 20c connected to the manipulator part 20b, a force sensor 20d, and an imaging device 30.”):
obtaining a goal image in which a virtual attractive force is set in a feature portion (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the force is determined based on the positional relationship between the object in the captured image and the assembled component in the goal image, together with the load. Additionally, impedance control is used to control the robot, and said impedance control uses the positional relationship and the determined load. The impedance control therefore acts as a virtual attractive force.),
obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages teach that a current image containing a feature portion corresponding to the feature portion in the goal image is captured.),
outputting information for controlling the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set (Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).”. The cited passages show that the robot is controlled by the virtual attractive force (i.e., the impedance control) such that the feature portion in the current image approaches the feature portion in the goal image.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot control method taught in Ouchi with the method of determining force from the difference between two images taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of control allows the robot to continue to operate even if it comes into contact with an obstacle and prevents damage to the object the robot is gripping (Hashimoto: ¶ 0082, “Thereby, even if the component (object) 200 contacts the obstacle, the control device 10 generates a control signal so that the component (object) 200 follows the obstacle. Therefore, the object follows the obstacle. Thus, the robot 20 can be controlled. Thereby, the robot 20 can assemble the component (object) 200 by moving the component (object) 200 to the final target position without destroying the component (object) 200.”).
Ouchi in view of Hashimoto does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion,
outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set.
Matsuzaki, in the same field of endeavor, teaches obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”. The cited passages teach that the system sets a repulsive force at a feature point.),
outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set (Matsuzaki: ¶ 0023, “Next, the repulsive force vector field generation processing unit 5 that performs a process of generating a virtual repulsive force vector field using the environmental data stored in the environment data storage unit 4 will be described. The repulsive force vector field is generated by setting a three-dimensional lattice on the work space and defining a repulsive force vector 21 at each lattice point. The repulsive force vector 21 sets the obstacle 20 in the work space as a generation source. The direction of the repulsive force vector 21 set at each lattice point is the normal direction of the obstacle surface, that is, the direction perpendicular to the obstacle surface, and the magnitude of the repulsive force vector 21 is set so as to increase as it approaches the obstacle. When there are a plurality of obstacles 20 on the work space, for example, the repulsive force vector 21 generated from the obstacle 20 closest to each lattice point is finally set as the repulsive force vector 21 of the lattice point.”, ¶ 0028, “The representative point 31 is set to the outer shape and the inside of the robot arm 2 in order to avoid interference with the obstacle 20. For example, in the case of the three-dimensional shape model 30 of the robot arm 2 as shown in FIG. 3, the representative points 31 are set evenly on the outer surface of the rectangular parallelepiped or the cylindrical portion that is a component of the model. In order to avoid complication of the figure, some representative points 31 are omitted in FIG. In addition, when the hand 70 that is a part of the moving body has a work tool or the like, the representative point 31 is also set on the hand 70 when there is a size that may cause interference with an obstacle”, ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”. The cited passages clearly show that a repulsive force is set such that the robot avoids the feature at which a repulsive force is set.).
Ouchi in view of Hashimoto teaches an image processing method for obtaining information about force, the image processing method comprising: obtaining a goal image in which a virtual attractive force is set in a feature portion, obtaining, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured, and outputting information for controlling the robot when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set. Ouchi in view of Hashimoto does not teach obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, or outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set. Matsuzaki teaches these limitations. A person of ordinary skill in the art would have had the technological capabilities required to modify the method taught in Ouchi in view of Hashimoto to obtain a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and to output information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki. Furthermore, the method taught in Ouchi in view of Hashimoto is already configured to set a virtual attractive force in the goal image and to control the robot such that the feature point in the current image approaches the feature point in the goal image. As such, one of ordinary skill in the art would have been able to add a repulsive force at a feature point, and to avoid said feature point based on the repulsive force in the control of the robot, as taught in Matsuzaki. Even though the feature portions of the obstacle and the control object in Matsuzaki are implemented using 3D models at predetermined locations of the robot, using the method with images taken from an imaging device would neither change the functionality of the method nor introduce new functionality. No inventive effort would have been required.
The combination would have yielded the predictable result of an image processing method for obtaining information about force, the image processing method comprising: obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method taught in Ouchi in view of Hashimoto with obtaining a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion, and outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set, as taught in Matsuzaki, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
Regarding claim 17, Ouchi in view of Hashimoto teaches a method for manufacturing a product by using the robot apparatus according to claim 1, the method comprising: outputting information for controlling the robot to avoid, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, the feature portion in the goal image in which the virtual repulsive force is set, thereby causing the robot to bring a control target object into contact with another object (Ouchi: ¶ 0052, “The force detection information is used in force control, which is control based on force detection information, out of types of control of the robot 20 carried out by the robot control device 40. The force control is control to operate at least one of the end effector E and the manipulator M such that the external force indicated by the force detection information realizes a state where a predetermined termination condition is satisfied.”, ¶ 0093, “In an example illustrated in FIG. 4, the task category information C1 is information indicating a task category into which tasks that include operation of the robot 20 pressing an object gripped by the robot 20 to another object through force control is classified.”, ¶ 0099, “In the example illustrated in FIG. 4, the task category information C2 is information indicating a task category into which tasks that include operation of the robot 20 inserting an object gripped by the robot 20 into another object through force control is classified.”, ¶ 0166, “Herein, the parameter is an impedance parameter since the force control is impedance control in this example. That is, the parameter is each of a virtual inertia coefficient, a virtual viscosity coefficient, and a virtual elasticity coefficient.” Hashimoto: ¶ 0043, “Further, the visual control unit 110 detects the positional relationship between the component (object) 200 and the assembly component 210 based on the captured image. Specifically, for example, the visual control unit 110 uses a component (object) based on a current image obtained by imaging the current object and a goal image obtained by moving the object to a target position. The positional relationship between 200 and the part to be assembled 210 is detected. Then, the visual control unit 110 outputs a positional relationship signal indicating the detected positional relationship to a load displacement conversion unit 120 described later of the compliant motion control unit 150. Thereby, the load displacement conversion part 120 mentioned later determines the axis which prescribes | regulates the operation | movement of impedance control based on a positional relationship signal.”, ¶ 0045, “The load displacement conversion unit 120 acquires load information (for example, force applied to the component 200) input from the load calculation unit 40 at a predetermined sampling rate (1 kHz as an example in the present embodiment). Then, based on the positional relationship signal input from the visual control unit 110 and the load information acquired from the load calculation unit 40, the load displacement conversion unit 120 moves the relative position to which the robot 20 is moved according to the force.”, ¶ 0077, “Next, a goal image after the assembly of the component 200 is generated from the detected position and orientation of the component 210 to be assembled (step S104). 
The visual control unit 110 generates a first operation instruction signal by visual servoing based on the difference between the goal image and the current image (step S105). Next, the load displacement conversion unit 120 generates a second operation instruction signal by impedance control from the positions and postures of the component 200 and the assembly component 210 (step S106).” As can be seen from the cited passages, the robotic system is configured to perform various tasks that involve bringing an object gripped by the robot into contact with another object using the dynamical characteristics.).
Regarding claim 18, Ouchi in view of Hashimoto teaches a non-transitory computer-readable recording medium storing a program for causing a computer to execute the method according to claim 14 (Hashimoto: ¶ 0110, “Further, a program for executing each process of the control devices (10, 10b, 10c) of each embodiment is recorded on a computer-readable recording medium, and the program recorded on the recording medium is read into a computer system. , The above-described various processes relating to the control devices (10, 10b, 10c) may be performed.”).
Regarding claim 19, Ouchi in view of Hashimoto teaches wherein the controller displays at least one of types of a virtual force direction or parameters of an obtained virtual force direction on a display portion (Ouchi: Figure 4 region JG5, ¶ 0166, “The tab TB4 is a tab to display a GUI that receives a parameter of force control in the region TBR in the task indicated by task information selected by the user on the operational screen P2 illustrated in FIG. 5. Herein, the parameter is an impedance parameter since the force control is impedance control in this example. That is, the parameter is each of a virtual inertia coefficient, a virtual viscosity coefficient, and a virtual elasticity coefficient. The movement of the robot 20 through the force control is determined by these impedance parameters.”, ¶ 0169, “In the task indicated by the task information selected by the user on the operational screen P2 illustrated in FIG. 5, the region JG5 is a region to display a plurality of input fields, into which each of parameters according to the task (in this example, parameters of force control) is input by the user.”. The controller is clearly configured to display both the type and parameter of the virtual dynamical characteristics, wherein such quantities include a virtual inertia coefficient, a virtual viscosity coefficient, and a virtual elasticity coefficient.).
Regarding claim 20, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the controller displays, in the predetermined image or the captured image, the attractive force acting between the third feature portion and the first feature portion as a first figure, and wherein the controller displays, in the goal image or the current image, the repulsive force acting between the third feature portion and the second feature portion as a second figure different from the first figure (Ouchi: Figures 6-9 and 11 region FM, ¶ 0156, “The display control unit 61 acquires force detection information as a response of this instruction from the robot control device 40. The display control unit 61 displays, in the region FM, a graph showing time changes of each of external forces indicated by the acquired force detection information. That is, the region FM is a region to display this graph.”, ¶ 0158, “In addition, on the operational screen P3 illustrated in FIG. 7, an image according to the tab TB2 is displayed in the region MM1. In a case where the user clicks (taps) the tab TB2, the display control unit 61 displays the image according to the tab TB2 instead of the image which has been displayed in the region MM1 until then. In the example illustrated in FIG. 7, in the region MM1, an image showing a state where the target object O1 is inserted into the insertion portion O21 is displayed as the image according to the tab TB2.”, Matsuzaki: ¶ 0030, “Next, the repulsive force in the magnitude and direction of the repulsive force vector field 21 generated by the repulsive force vector field generation processing unit 5 is applied to the representative point 31 of the three-dimensional shape model 30 of the robot arm 2 described above for each link of the robot arm 2.”, ¶ 0061, “At this time, the magnitude of the attractive force vector 61 is calculated and set from a component proportional to the positional deviation”.).
Ouchi in view of Hashimoto teaches displaying a force to a user along with the goal image. Matsuzaki teaches determining the magnitude and direction of attractive and repulsive forces between a robot and its environment. A person of ordinary skill in the art would have been able to modify the display method taught in Ouchi in view of Hashimoto to display the attractive and repulsive forces taught in Matsuzaki. The modification would not have changed the functionality of either method or introduced new functionality. No inventive effort would have been required, as the modification only requires the simple substitution of the variables being displayed. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the combination of Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the controller displays, in the predetermined image or the captured image, an attractive force acting between the third feature portion and the first feature portion as a first figure, and wherein the controller displays, in the goal image or the current image, a repulsive force acting between the third feature portion and the second feature portion as a second figure different from the first figure.
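A minimal sketch only of such a display (hypothetical colors, coordinates, and OpenCV usage; this rendering is not the technique of any cited reference) might draw the two forces as visually distinct arrows:

import cv2

def draw_force_figures(image, feature, attractive_tip, repulsive_tip):
    # First figure: the attractive force as a green arrow; second
    # figure: the repulsive force as a red arrow, so the two figures
    # are distinguishable. All points are integer (x, y) pixel tuples.
    cv2.arrowedLine(image, feature, attractive_tip, (0, 255, 0), 2)
    cv2.arrowedLine(image, feature, repulsive_tip, (0, 0, 255), 2)
    return image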
Claim(s) 8, 9, and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0029232 A1 ("Ouchi") in view of JP 2013180380 A (“Hashimoto”) in further view of JP 2012011498 A (“Matsuzaki”) in further view of US 2021/0059762 A1 (“NG”).
Regarding claim 8, Ouchi in view of Hashimoto in further view of Matsuzaki teaches wherein the controller displays a second user interface image (Ouchi: Figure 5 operational screen P2, ¶ 0118, “The operational screen P2 includes task information S1, task information S2, task information S3, a plurality of buttons including each of the button B1, a button B4, and a button B5, and the file name input field CF1 as GUIs.”).
Ouchi in view of Hashimoto in further view of Matsuzaki does not teach wherein the controller displays a second user interface image for receiving setting of the first feature portion and the second feature portion on a display portion.
NG, in the same field of endeavor, teaches wherein the controller displays a second user interface image for receiving setting of the first feature portion and the second feature portion on a display portion (NG: ¶ 0117, “Once a suitable image section of the kidney is located, the surgeon interactively selects/labels one or more landmark features e.g. 602, 604, 606, 608 on the ultrasound image 600 and the one or more landmarks are highlighted by the image feedback unit on the display screen.”. The cited passage shows that the user can select the feature using a user interface.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki with the user interface for setting the feature portions taught in NG with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it would have been obvious to try. Allowing the user to set features is common in many applications and would have been known to one of ordinary skill in the art. One of ordinary skill would have been able to see the advantage in allowing the user to apply their expertise in setting the features. Furthermore, such a method of setting feature portions through a user interface would have been within the technological capabilities of a person of ordinary skill in the art.
Regarding claim 9, Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG teaches wherein the controller displays the goal image in the second user interface image (Ouchi: Figure 7 region MM1, ¶ 0158, “In the example illustrated in FIG. 7, in the region MM1, an image showing a state where the target object O1 is inserted into the insertion portion O21 is displayed as the image according to the tab TB2.”. One of ordinary skill in the art would see that the image of the object having been inserted into the insertion portion corresponds to the goal image.).
Regarding claim 12, Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG teaches wherein the controller displays, on the goal image displayed in the second user interface image, figures respectively corresponding to the first feature portion and the second feature portion that have been already set (NG: Figure 6 landmark features 602, 604, 606, 608, ¶ 0117, “Once a suitable image section of the kidney is located, the surgeon interactively selects/labels one or more landmark features e.g. 602, 604, 606, 608 on the ultrasound image 600 and the one or more landmarks are highlighted by the image feedback unit on the display screen.”).
Claim(s) 10 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0029232 A1 ("Ouchi") in view of JP 2013180380 A (“Hashimoto”) in further view of JP 2012011498 A (“Matsuzaki”) in further view of US 2021/0059762 A1 (“NG”) in further view of US 9595095 B2 (“Aiso”).
Regarding claim 10, Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG does not teach wherein the controller displays, in the second user interface image, a button for obtaining a plurality of feature candidates serving as candidates of the first feature portion and the second feature portion from the goal image.
Aiso, in the same field of endeavor, teaches wherein the controller displays, in the second user interface image, a button for obtaining a plurality of feature candidates serving as candidates of the first feature portion and the second feature portion from the goal image (Aiso: Column 7 lines 35-48, “The feature extraction part 222 reads out model image data newly stored in the model generation processing, which will be described later, from the data memory unit 215. The feature extraction part 222 extracts a feature, e.g. an edge with respect to each small area formed by segmentation of the image represented by the read out model image data.”, Column 11 lines 5-11, “In response to the press down of the instruction button 2, the feature extraction part 222 generates base model data based on the base model image data, and generates additional model data based on the additional model image data. Then, the model integration part 223 performs model integration processing with respect to the base model data and the additional model data.”. As can be seen from the cited paragraphs, pressing the button causes the feature extraction part to extract the features from the model image data.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG with the use of a button that obtains a plurality of features taught in Aiso with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because obtaining a plurality of features allows the robot to reliably detect an object when there are multiple objects (Aiso: Column 1 lines 57-65, “Existence of surrounding objects: In pattern matching processing, images representing targets are registered as models. In this regard, a registered area (window) may contain images of other objects than the targets. An advantage of some aspects of the invention is to provide a robot system that may reliably detect objects.”).
Regarding claim 11, Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG in further view of Aiso teaches wherein in a case where the button is operated, the controller displays, on the predetermined image displayed in the second user interface image, figures respectively corresponding to the plurality of feature candidates such that a user is capable of selecting the first feature portion and the second feature portion to be set from among the plurality of feature candidates (NG: ¶ 0112, “As for landmark localisation, one or more landmark features may be identified and labelled on the model for subsequent use in a registration step (compare 106 of FIG. 1). As shown in FIG. 4, the 3D model of the kidney 400 comprises saddle ridge 402, peak 404, saddle valley 406 and pit 408 landmarks.”, ¶ 0117, “Once a suitable image section of the kidney is located, the surgeon interactively selects/labels one or more landmark features e.g. 602, 604, 606, 608 on the ultrasound image 600 and the one or more landmarks are highlighted by the image feedback unit on the display screen.”. Aiso: Column 7 lines 35-48, “The feature extraction part 222 reads out model image data newly stored in the model generation processing, which will be described later, from the data memory unit 215. The feature extraction part 222 extracts a feature, e.g. an edge with respect to each small area formed by segmentation of the image represented by the read out model image data.”, Column 11 lines 5-11, “In response to the press down of the instruction button 2, the feature extraction part 222 generates base model data based on the base model image data, and generates additional model data based on the additional model image data. Then, the model integration part 223 performs model integration processing with respect to the base model data and the additional model data.”.).
Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG teaches where features are extracted and then displayed and labeled on an image. Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG further teaches where the user can set features through a user interface. Aiso teaches obtaining a plurality of features from an image in response to the user pressing a button on the user interface. A person of ordinary skill in the art would have had the technological capabilities to have combined the method of extracting and displaying features on an image and setting features through user input taught in Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG with the method of obtaining a plurality of features when the user presses a button on a user interface as taught in Aiso. No inventive effort would have been required. Furthermore, the resulting combined method would have yielded the predictable result of allowing the user to set features from a plurality of features displayed on an image at the push of a button on the user interface. No new functionality would arise from the combination. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the combination of elements taught by Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG in further view of Aiso teaches all of the limitations of claim 11.
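As a minimal sketch only of such a button handler (hypothetical names; the OpenCV corner detector here is an illustrative stand-in, not the extraction method of NG or Aiso), pressing the button could extract feature candidates from the goal image, from which the user then selects the first and second feature portions:

import cv2

def on_extract_candidates(goal_image_gray):
    # Detect up to 20 corner candidates in the goal image; the user
    # then selects the first and second feature portions among them.
    corners = cv2.goodFeaturesToTrack(goal_image_gray, maxCorners=20,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return []
    return [tuple(map(int, pt.ravel())) for pt in corners]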
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0029232 A1 ("Ouchi") in view of JP 2013180380 A (“Hashimoto”) in further view of JP 2012011498 A (“Matsuzaki”) in further view of US 2021/0059762 A1 (“NG”) in further view of US 2013/0238131 A1 (“Kondo”).
Regarding claim 13, Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG does not teach wherein the controller displays the goal image and the current image in an overlapping state in the second user interface image.
Kondo, in the same field of endeavor, teaches wherein the controller displays the goal image and the current image in an overlapping state in the second user interface image (Kondo: ¶ 0088, “FIG. 8A illustrates a state in which the detailed contour of the object to be held extracted in the region specified by the user is superimposed upon the captured image displayed on the display screen.”. The cited paragraph teaches overlaying two images, where one image is a contour and the other image is of the environment with the object. The cited paragraph also teaches displaying the overlaid image.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG with the method of overlaying two images and displaying the result taught in Kondo with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it is a simple substitution of the input to the algorithm. One of ordinary skill in the art would have recognized that the algorithm used to overlay two images does not change when applied to a different set of images. Furthermore, one of ordinary skill in the art would have been able to modify the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki in further view of NG with the method of overlaying two images taught in Kondo without inventive effort.
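As a minimal sketch only of one way to render such an overlapping display (hypothetical weights; alpha blending is an illustrative choice, not necessarily Kondo's disclosed technique), the goal image and the current image could be blended for display:

import cv2

def overlay_goal_on_current(goal_image, current_image, alpha=0.5):
    # Blend the goal image over the current image so that both remain
    # visible in a single overlapping display.
    return cv2.addWeighted(goal_image, alpha, current_image, 1.0 - alpha, 0)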
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0029232 A1 ("Ouchi") in view of JP 2013180380 A (“Hashimoto”) in further view of JP 2012011498 A (“Matsuzaki”) in further view of US 11877816 B2 (“Goswani”).
Regarding claim 21, Ouchi in view of Hashimoto in further view of Matsuzaki does not teach wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display.
Goswani, in the same field of endeavor, teaches wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display (Goswani: Figures 15 and 16, Column lines, “Referring to the example of FIG. 15, illustrated is a display 500 includes an image of the tool 14 captured by the imaging tool 15 and a marker 1502 before an operator S performs an alignment step. The marker 1502 may be provided using a two-dimensional (2D) or a three-dimensional (3D) model of the tool 14, and thus may represent an actual solid model of the tool 14. The model of the tool 14 may be provided using, for example, computer aided design (CAD) data or other 2D or 3D solid modeling data representing the tool 14 (e.g., tool 14 of FIG. 4). In the example of FIG. 15, the marker 1502 is a virtual representation of the tool 14, and includes a virtual shaft 1504, a virtual wrist 1506, and a virtual end effector 1508. When the marker 1502 is in a virtual representation of a tool, the marker 1502 is also referred to as a virtual tool 1502 herein. The virtual tool 1502 and its position, orientation, and size as shown in the display 500 may be determined by the pose of the master control device and the alignment relationship between the master control device and the display 500. In an embodiment, the virtual tool 1502 is manipulatable at each joint (e.g., at the virtual wrist 1506) by the master control device, so that the pose of the tool 14 may be mimicked by the virtual tool 1502 by the operator S using the master control device in the alignment step. In the example of FIG. 15, the size of the virtual tool 1502 is smaller than the actual tool 14. The virtual tool 1502 may be represented in a number of different ways. In an example, the virtual tool 1502 is a semi-transparent or translucent image of the tool 14. In another example, the virtual tool 1502 is a wire diagram image of the tool 14. In yet another example, the virtual tool 1502 is an image that appears solid (i.e., not transparent/translucent), but such a solid virtual tool 1502 may make viewing of the actual tool 14 in the display 500 difficult.”, Column lines, “ Referring to the example of FIG. 16, illustrated is a display 500 after the operator S has performed the alignment step to move the master control device to overlay the virtual tool 1502 with the actual tool 14. The virtual tool 1502 has been moved based on the change in the alignment relationship between the master control device and the display. In other words, the position, orientation, and size of the virtual tool 1502 as shown in the display 500 of FIG. 16 correspond to the new pose of the master control device and the new alignment relationship between the master control device and the display after the operator S performs the alignment step.”. The cited passages clearly teach a display that allows a user to overlap a predetermined image (i.e. the marker which is a virtual representation of the robot) with a captured image (i.e. the image of the robot as it currently is).).
Ouchi in view of Hashimoto in further view of Matsuzaki teaches a robot apparatus. Ouchi in view of Hashimoto in further view of Matsuzaki does not teach wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display. Goswani teaches wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display. A person of ordinary skill in the art would have had the technological capabilities required to have combined the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki with wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display taught in Goswani. Furthermore, the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki is already configured with a display that displays images to a user and allows a user to control certain aspects of the system. Modifying the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki to allow the user to overlap two images as taught in Goswani would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a robot apparatus wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the robot apparatus taught in Ouchi in view of Hashimoto in further view of Matsuzaki with wherein the controller is configured to allow a user to set whether the goal image overlaps with the current image on a display taught in Goswani with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
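For illustration only, the following is a minimal sketch by the editor, not drawn from Goswani or the record, of a user-settable flag that controls whether a goal image is blended over the current image before display; all identifiers are hypothetical and alpha-blending is merely one plausible implementation of the overlap.

```python
# Editor's illustrative sketch only -- not drawn from Goswani or the record.
# A boolean setting decides whether the goal image is blended over the
# current image before display; all names and the weight are hypothetical.
import numpy as np

def render_frame(current: np.ndarray, goal: np.ndarray,
                 show_goal_overlay: bool, alpha: float = 0.4) -> np.ndarray:
    """Return the frame to display, overlaying the goal image only if enabled."""
    if not show_goal_overlay:
        return current  # user has turned the overlay off
    blend = (1.0 - alpha) * current.astype(np.float32) \
            + alpha * goal.astype(np.float32)
    return blend.astype(np.uint8)
```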
Response to Arguments
Applicant's arguments filed November 4th, 2025 have been fully considered but they are not persuasive.
Regarding Applicant’s arguments on pages 9-11, Applicant argues that the prior art of record does not teach the limitations of the amended independent claims 1, 14, 15, and 16. Specifically, Applicant argues that neither the secondary reference Hashimoto nor the secondary reference Matsuzaki teaches the limitation “wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set.” The Examiner respectfully disagrees. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). As stated above in the 35 U.S.C. § 103 rejection of the independent claims, the primary reference teaches a robot apparatus comprising (Ouchi: Figure 1 robot system 1, ¶ 0044): a robot (Ouchi: Figure 1 robot 20, ¶ 0044); and a controller configured to control the robot (Ouchi: Figure 1 robot 20, ¶ 0044), and wherein the controller controls the robot (Ouchi: ¶ 0052, ¶ 0093, ¶ 0099, ¶ 0105, ¶ 0166). The secondary reference Hashimoto teaches an image pickup unit (Hashimoto: Figure 1 imaging device 30, ¶ 0021); wherein the controller obtains a goal image in which a virtual attractive force is set in a feature portion (Hashimoto: ¶ 0043, ¶ 0045, ¶ 0077), wherein the controller obtains, by using the image pickup unit, the current image in which a feature portion corresponding to the feature portion in the goal image is captured (Hashimoto: ¶ 0043, ¶ 0077), and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set (Hashimoto: ¶ 0043, ¶ 0045, ¶ 0077). The cited passages of Hashimoto clearly teach that an imaging device is used to capture a goal image as well as a current image of the object currently gripped by the robot. The goal image is used to set the parameters required for the system to perform impedance control of the robot such that the current image of the object approaches the goal image of the object. One of ordinary skill in the art would have recognized that this impedance control functions as a virtual attractive force. Additionally, the system taught in Ouchi is already configured to perform impedance control of a robot in order to manipulate an object. As such, the system taught in Ouchi is readily configurable with the methods taught in Hashimoto. Furthermore, such a modification would not have changed or introduced new functionality. No inventive effort would have been required.
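For illustration of the “virtual attractive force” reading discussed above only, the following is a minimal sketch by the editor, offered as an analogy and not drawn from Hashimoto, Ouchi, or the record: feature points extracted from the current image are driven toward the corresponding feature points in the goal image by a spring-like proportional term, which is the behavior the impedance-control reading attributes to an attractive force. All identifiers and the gain are hypothetical.

```python
# Editor's illustrative sketch only -- an analogy for reading impedance
# control as a "virtual attractive force"; not drawn from Hashimoto or the
# record. Feature arrays are (N, 2) pixel coordinates; the function name
# and the gain are hypothetical.
import numpy as np

def attractive_step(current_feats: np.ndarray, goal_feats: np.ndarray,
                    k_att: float = 0.1) -> np.ndarray:
    """One control step: move each feature toward its goal, spring-like."""
    error = goal_feats - current_feats    # per-feature pixel-space error
    return current_feats + k_att * error  # proportional pull toward the goal
```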
The secondary reference Matsuzaki teaches wherein the controller obtains a goal image in which a virtual attractive force and a virtual repulsive force are set in a feature portion (Matsuzaki: ¶ 0023, ¶ 0028), and wherein the controller controls the robot, when the feature portion in the current image approaches the feature portion in the goal image in which the virtual attractive force is set, to avoid the feature portion in the goal image in which the virtual repulsive force is set (Matsuzaki: ¶ 0023, ¶ 0028, ¶ 0030). Furthermore, the system taught in Ouchi in view of Hashimoto is already configured to set a virtual attractive force in the goal image and to control the robot such that the feature point in the current image approaches the feature point in the goal image. As such, one of ordinary skill in the art would have been able to add the repulsive force at a feature point, and the avoidance of said feature point based on the repulsive force, to the control of the robot as taught in Matsuzaki. Even though the feature portions of the obstacle and the control object are implemented using 3D models at predetermined locations of the robot, using the method with images taken from an imaging device would not change the functionality of the method or introduce new functionality. No inventive effort would have been required. Therefore, it is the opinion of the Examiner that the combination of Ouchi in view of Hashimoto in further view of Matsuzaki teaches the limitations of the amended independent claims 1, 14, 15, and 16.
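For illustration only, the sketch below extends the attractive-force sketch above with a repulsive term, in the manner of a classic potential-field controller; it is the editor's analogy, not drawn from Matsuzaki or the record, and all identifiers, gains, and the cutoff radius are hypothetical. The repulsive term acts only within a local radius of a marked feature, so adding it leaves the existing attraction toward the goal features intact.

```python
# Editor's illustrative sketch only -- a classic potential-field step, not
# drawn from Matsuzaki or the record. Features marked repulsive push the
# current features away inside a cutoff radius while the attractive term
# pulls toward the goal; all names, gains, and the cutoff are hypothetical.
import numpy as np

def potential_field_step(current_feats: np.ndarray, goal_feats: np.ndarray,
                         repulsive_feats: np.ndarray, k_att: float = 0.1,
                         k_rep: float = 50.0, cutoff: float = 30.0) -> np.ndarray:
    """One step combining attraction to goal features and local repulsion."""
    step = k_att * (goal_feats - current_feats)  # attraction toward the goal
    for r in repulsive_feats:                    # each repulsive feature point
        diff = current_feats - r                 # vectors pointing away from it
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9
        near = dist < cutoff                     # repulsion acts only locally
        step += near * k_rep * diff / dist**3    # push with ~k_rep/d^2 magnitude
    return current_feats + step
```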
Therefore, for the reasons stated in the 35 U.S.C. § 103 rejection of the independent claims and herein, the rejection of the independent claims is maintained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah W Stiebritz whose telephone number is (571)272-3414. The examiner can normally be reached Monday through Friday, 7:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.W.S./ Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658