Prosecution Insights
Last updated: April 19, 2026
Application No. 18/889,385

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Final Rejection §103
Filed
Sep 19, 2024
Examiner
STIEBRITZ, NOAH WILLIAM
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Fujifilm Corporation
OA Round
2 (Final)
Grant Probability: 67% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 6m
Grant Probability with Interview: 51%

Examiner Intelligence

Career Allow Rate: 67% — above average (12 granted / 18 resolved; +14.7% vs TC avg)
Interview Lift: -15.6% — minimal (resolved cases with vs. without interview)
Avg Prosecution: 2y 6m (typical timeline); 44 applications currently pending
Total Applications: 62 across all art units (career history)

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 61.7% (+21.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 18 resolved cases
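The displayed percentages can be reproduced from the underlying counts. As a sanity check of the arithmetic (a sketch, assuming the dashboard rounds to one decimal place and that the with-interview allow rate is 51.1% underneath the displayed "51%"):

```python
# Sanity check of the dashboard arithmetic. The 12/18 counts come from the
# page; the 51.1% with-interview figure is an assumption chosen to be
# consistent with the displayed -15.6% interview lift.

def pct(n, d):
    """Percentage rounded to one decimal place."""
    return round(100.0 * n / d, 1)

career_allow = pct(12, 18)       # 12 granted / 18 resolved
with_interview = 51.1            # assumed underlying value behind "51%"
interview_lift = round(with_interview - career_allow, 1)

print(career_allow)              # 66.7 (displayed as "67%")
print(interview_lift)            # -15.6
```

The same subtraction against an assumed Tech Center average of about 52% would yield the "+14.7% vs TC avg" delta shown above.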

Office Action

§103
DETAILED ACTION

This is a Final Office Action on the Merits in response to communications filed by applicant on February 4th, 2026. Claims 1-6 and 9-15 are currently pending and examined below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments to the Claims, filed on February 4th, 2026, have been entered. Claims 1, 10, 14, and 15 are currently amended and pending; claims 2-6, 9, and 11-13 are original, unamended, and pending; and claims 7 and 8 have been canceled.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of priority Application No. JP2023-163989, filed on September 26th, 2023, has been filed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-6, 9, and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 10186130 B2 ("Whelan") in view of US 10631942 B2 ("Hashimoto") in further view of JP 2020116385 A ("Usui"). Regarding claim 1, Whelan teaches an information processing system comprising at least one processor, wherein the processor is configured to (Whelan: Abstract, “The advantageous embodiments include a system for operating machinery in a manufacturing environment. The system includes a sensor system and a computer. The sensor system is configured to distinguish human skeletal positions from non-human object positions and to determine whether one or more humans are present in a predetermined area. The computer is configured to: responsive to determining that only the one or more humans are in the predetermined area, determine whether a false positive result has occurred, wherein the false positive comprises a first determination that the one or more humans are present when no human is actually present. The computer is also configured to: responsive to determining that the false positive result has not occurred, taking an action selected from the group consisting of issuing an alert, stopping the machinery, or a combination thereof.”, Column 5 lines 1-14, “As described above, machine 102 and machine 104 are mechanical devices used to complete industrial tasks.
In a specific example, typically used in large scale manufacturing, hydraulic robots are automatically controlled, reprogrammable, multipurpose machines capable of moving in three or more axes. Typically, hydraulic robots have a large hydraulically-driven arm anchored to a base-like structure as shown at machine 102 and machine 104 in FIG. 1. Most hydraulic robots handle large repetitive tasks, such as welding, painting, and or molding. These machines can operate at high speeds and can carry heavy loads, making them ideal for manufacturing work. Such robots help manufacturers become more competitive and efficient, while reducing work related injuries caused by repetition.”, Column 17 lines 53-67, “Processor unit 1704 serves to execute instructions for software that may be loaded into memory 1706. This software may be any of the associative memories described elsewhere herein, or software for implementing the processes described elsewhere herein. Thus, for example, software loaded into memory 1706 may be software for executing method 500 of FIG. 5 or method 1500 of FIG. 15.”. The cited passages clearly teach an information processing system comprising a processor.): acquire motion information indicating a motion of an operator who operates (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the objects movements detected by the sensor are justified (operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”. The cited passages clearly show that the system is configured to acquire motion information of an operator using a motion detector.), including a detection unit of at least one of a sensor, a camera, or a microphone (Whelan: Column 11 lines 56-63, “FIG.
10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”, Column 17 lines 22-30, “Robotic motion control system 1600 also includes human sensor 1608 in communication with motion controller 1604. Human sensor 1608 is calibrated to scan work area 1606 using structured light sensors 1610 to identify human 1612 and motion thereof within work area 1606. An example of a structured light sensor is a camera with software for interpreting images.”); control the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition (Whelan: Column 10 lines 39-49, “Venn diagram 700 illustrates the interactions between humans 702, robots 704, and objects 706. When a human or object interacts with a robot, it is either ignored 708 or an alarm is triggered 710 (or some other action taken with respect to the robot or other machine). Moreover, the computer ignores static objects 712, objects at an acceptable distance 714, valid parts of an object 716, and justified movements 718. For invalid parts of an object 720 and unjustified movements 722, the computer may instruct the machine or robot to stop working or otherwise modify its operation.”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. 
In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. The cited passages clearly show that the system is configured to transition to a non-operable mode when the motion information satisfies a first condition.); acquire detection information acquired by the detection unit (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516).
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the objects movements detected by the sensor are justified ( operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 12 lines 19-31, “The computer then compares the data received from human motion sensor 1000 to the known justified or undesirable movements, and then classifies each given input from the human motion sensor 1000 as justified or to be ignored.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. 
The cited passages clearly show that the processor is configured to acquire the detection result from the detection unit.); and output the detection information to at least one output device (Whelan: Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”, Column 18 lines 28-35, “Input/output (I/0) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/0) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/0) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”); and stop the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition (Whelan: Column 4 lines 50-55, “In an illustrative embodiment, sensors, such as sensor 106, sensor 108, and sensor 110 are used to detect humans and human movement in proximity to machine 102 or machine 104. Also shown in FIG. 1 are scan areas, such as scan area 112 and scan area 114, between which is the line of sight of a given sensor.”, Column 9 lines 35-40, “As indicated above, the advantageous embodiments use information collected from a human motion sensor, such as human motion sensor 600. 
As soon as human motion sensor 600 recognizes an object as human-like, as indicated by arrow 602, it sends a signal to a computer or software that a human is present, as indicated at skeleton 604.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. The cited passages show that, when a human enters the scanning area of the human motion detector, the system is configured to display the tracking information to a display unit. One of ordinary skill in the art would recognize that this is a second predetermined condition that is different from the first condition, as the second condition is met when a human enters the scanning area of the human motion sensor and the first condition is met when the human enters an area that is within a predetermined range of the robot.). Whelan does not teach acquire motion information indicating a motion of an operator who operates a movable unit of a robot, the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquire detection information acquired by the detection unit of the robot. Hashimoto, in the same field of endeavor, teaches acquire motion information indicating a motion of an operator who operates a movable unit of a robot (Hashimoto: Column 3 lines 50-67, “The remote control robot system 100 according to this embodiment is a system including a master-slave type robot in which a slave arm operates following a motion of a master arm.
The remote control robot system 100 is configured so that an operator located at a position distant from a working area (outside the working area) of a slave arm 10 (a robotic arm, will be described later in detail) of the robot main body 1 can input an operational instruction to the remote control robot system 100 by moving a master arm 70 of the remote control device 2 (a robotic arm operational instruction input part, will be described later in detail), to make the slave arm 10 perform an operation corresponding to the operational instruction by a control of the control device 3 to perform a work, such as an assembling work of components.”, Column 5 lines 13-32, “In this embodiment, the given operating condition parameter change instructing action is the operator making the hand gesture, and the contactless action detecting part 71 is a detector which detects the operator's hand gesture within a range set above the contactless action detecting part 71. As illustrated in FIG. 2, the contactless action detecting part 71 includes an infrared radiator 71a for radiating an infrared ray upwardly and a stereo camera 71b for receiving the infrared ray radiated from the infrared radiator 71a and reflected on a target object. It is further configured to calculate an attitude of each finger (a shape of a hand) and a motion of the hand based on an image captured by the stereo camera 71b. Further, the contactless action detecting part 71 is installed near the master arm 70 and configured to be capable of manipulating the robot main body 1 while parallelly performing an input of the operational instruction to the master arm 70 and an input of the operating condition parameter change instructing action to the contactless action detecting part 71. For example, LEAP(®) of Leap Motion Inc. may be used as the contactless action detecting part 71.”. The cited passages clearly teach acquiring motion information of an operator who controls a robot.).
Whelan teaches an information processing system comprising at least one processor, wherein the processor is configured to: acquire motion information indicating a motion of an operator, including a detection unit of at least one of a sensor, a camera, or a microphone; control the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition; acquire detection information acquired by the detection unit; output the detection information to at least one output device; and stop the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition. Whelan does not teach acquire motion information indicating a motion of an operator who operates a movable unit of a robot. Hashimoto teaches acquire motion information indicating a motion of an operator who operates a movable unit of a robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Whelan with acquire motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto. Furthermore, Whelan is already configured to monitor the motion of humans in the vicinity of the robot(s) and control the robots based on said motion. Additionally, Whelan teaches input devices through which an operator can interact with the system (Whelan: Column 14 lines 5-10, “Finally, the advantageous embodiments contemplate using laptop 1312 to install, configure and or optimize the system. Laptop 1312 could connect to the machine via a physical plug or through a secured network connection. 
Thus, the advantageous embodiments are not necessarily limited to a dedicated computer or computer 1310.”, Column 18 lines 28-35, “Input/output (I/O) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/O) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/O) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”). The system taught in Whelan is therefore readily configurable with the method of detecting motion of an operator of the system taught in Hashimoto. All that would be required to modify the system to acquire motion information of an operator who operates the robot would be to implement the methods taught in Hashimoto. Such a modification would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an information processing system configured to: acquire motion information indicating a motion of an operator who operates a movable unit of a robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the information processing system taught in Whelan with acquire motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquire detection information acquired by the detection unit of the robot.
Usui, in the same field of endeavor, teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone (Usui: ¶ 0017, “A medical arm device 510 according to this embodiment is provided beside the treatment table 530. The medical arm device 510 includes a base 511 serving as a base, an arm 512 extending from the base 511, and an imaging unit 515 connected to the tip of the arm 512 as a tip unit.”, ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1 , the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0025, “Referring to FIG. 2, a medical arm device 400 according to this embodiment includes a base portion 410 and an arm portion 420.”, ¶ 0026, “The arm section 420 has a plurality of joints 421 a to 421 f, a plurality of links 422 a to 422 c connected to each other by the joints 421 a to 421 f, and an imaging unit 423 provided at the tip of the arm section 420.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”); and acquire detection information acquired by the detection unit of the robot (Usui: ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. 
The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1 , the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0020, “Furthermore, a display device 550 such as a monitor or display is installed at a position facing the user 520. The image of the treatment site captured by the imaging unit 515 is displayed as an electronic image on the display screen of the display device 550 . The user 520 performs various treatments while viewing the electronic image of the treatment area displayed on the display screen of the display device 550.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”. The cited passages clearly show that the processor acquires the detection result from the detection unit of the robot.). Whelan in view of Hashimoto teaches wherein: includes a detection unit of at least one of a sensor, a camera, or a microphone, and the processor is configured to: acquire detection information acquired by the detection unit; and output the detection information to at least one output device. Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquire detection information acquired by the detection unit of the robot. 
Usui teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquire detection information acquired by the detection unit of the robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the system taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquire detection information acquired by the detection unit of the robot taught in Usui. Furthermore, the system taught in Whelan in view of Hashimoto is already configured to gather information using a detection unit and display this information on a display. Modifying the robot to have sensors mounted on the robot itself as taught in Usui would have been well within the technological capabilities of a person of ordinary skill in the art. The modification would have required simply adding additional sensors to the robot according to known methods. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an information processing system configured to: the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquire detection information acquired by the detection unit of the robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the system taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquire detection information acquired by the detection unit of the robot taught in Usui with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
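Read as a whole, the claim-1 mapping amounts to a simple per-cycle control policy: switch the robot to the non-operation mode when the operator's motion satisfies the first condition (e.g., coming within a predetermined range), and stop forwarding the detection unit's output when the second condition is met (the operator is a predetermined distance away). The sketch below is not taken from any cited reference; every name and threshold is a hypothetical illustration of that logic.

```python
# Hypothetical sketch of the claim-1 control logic as mapped in the rejection.
# Thresholds and identifiers are illustrative, not from Whelan/Hashimoto/Usui.

OPERATION = "operation"          # movable unit in an operable state
NON_OPERATION = "non-operation"  # movable unit in an inoperable state

ENTRY_RANGE_M = 1.0  # first condition: operator within this range of the robot
FAR_RANGE_M = 5.0    # second condition: operator at least this far away

def control_cycle(operator_distance_m, detection_info):
    """One cycle: choose the robot mode and decide whether to forward output."""
    mode = NON_OPERATION if operator_distance_m < ENTRY_RANGE_M else OPERATION
    # Stop outputting detection information when the operator is far away.
    output = None if operator_distance_m >= FAR_RANGE_M else detection_info
    return mode, output

print(control_cycle(0.5, "camera frame"))  # ('non-operation', 'camera frame')
print(control_cycle(6.0, "camera frame"))  # ('operation', None)
```

At intermediate distances (between the two thresholds) the robot stays operable and detection output continues, which is the state the rejection treats as the default.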
Regarding claim 4, Whelan in view of Hashimoto in further view of Usui teaches wherein the processor is configured to control the robot such that the robot is switched from the operation mode to the non-operation mode in a case where the motion information indicates that the operator has entered a predetermined area as the first condition (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 14 lines 20-28, “System 1400 is a system for operating machinery in a manufacturing environment including machinery 1402. System 1400 includes sensor system 1404 configured to distinguish human skeletal positions from non-human object positions. Sensor system 1404 is further configured to determine whether one or more humans are present in a predetermined area.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. The cited passages clearly teach that the robot is configured to transition to a non-operable mode when the human enters a predetermined area.). Regarding claim 5, Whelan in view of Hashimoto in further view of Usui teaches wherein the predetermined area is an area within a predetermined range from the robot (Whelan: Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). 
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the objects movements detected by the sensor are justified (operation 518).”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”. The cited passages clearly show that the predetermined area is a predetermined range from the robot.).
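Whelan's severity scale, as quoted for claims 4 and 5, is effectively a pair of nested distance thresholds around the machine: an outer distance that triggers an alarm and an inner, closer distance that stops the machine. A minimal sketch of that scale follows; the numeric distances are illustrative assumptions, since the cited passages give no values.

```python
# Two-threshold severity scale per the quoted Whelan passages (Column 13).
# The numeric distances are illustrative assumptions, not from the patent.

ALERT_DISTANCE_M = 2.0  # first distance: trigger an alarm
STOP_DISTANCE_M = 0.5   # second, closer distance: stop the machine

def severity_action(tracked_part_distance_m):
    """Map a tracked body part's distance from the machine to an action."""
    if tracked_part_distance_m <= STOP_DISTANCE_M:
        return "stop machine"
    if tracked_part_distance_m <= ALERT_DISTANCE_M:
        return "alert"
    return "ignore"

print(severity_action(0.3))  # stop machine
print(severity_action(1.5))  # alert
print(severity_action(4.0))  # ignore
```

Checking the inner threshold first is what makes the scale "severity"-ordered: the most severe condition (closest approach) always wins.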
Regarding claim 6, Whelan in view of Hashimoto in further view of Usui teaches wherein: the predetermined area is at least one of a radiation controlled area, a high magnetic field controlled area, a biohazard controlled area, an aseptic room, an intensive care room, or a clean room (Usui: Figure 22, ¶ 0252, “For example, Figure 22 is an explanatory diagram for explaining an application example of a medical arm system according to one embodiment of the present disclosure, and shows an example of a case in which a bilateral system is configured using the medical arm system. That is, in the example shown in FIG. 22, an arm device 510a that operates as a master device and an arm device 510b that operates as a slave device are connected via a network N1. The type of the network N1 connecting the arm device 510a and the arm device 510b is not particularly limited. With this configuration, an image of a patient 540 located in a remote location that is physically (or in some cases technically or conceptually) distant and captured by an imaging unit 560 is presented to a practitioner 520 via a display device 550. A remote location may be, for example, a different hospital, an adjacent room in the same hospital (for example, an X-ray room, CT (Computed Tomography) room, or radiation therapy room where medical equipment emits radiation), or a remote location in the same operating room.”. The cited passages teach that the robot (510b) is configured to operate in an area including an imaging device that emits radiation or in a radiation therapy room. These are clear examples of a radiation controlled area.), and the robot is operated in the predetermined area (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). 
If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the object's movements detected by the sensor are justified (operation 518).”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”, Column 14 lines 20-28, “System 1400 is a system for operating machinery in a manufacturing environment including machinery 1402. 
System 1400 includes sensor system 1404 configured to distinguish human skeletal positions from non-human object positions. Sensor system 1404 is further configured to determine whether one or more humans are present in a predetermined area.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. The cited passages clearly show that the system is configured to operate in the predetermined area.). Regarding claim 9, Whelan in view of Hashimoto in further view of Usui teaches wherein the processor is configured to: acquire assistant motion information indicating a motion of an assistant other than the operator (Whelan: Figure 10, Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”, Column 11 line 64 – Column 12 line 3, “As indicated above, in some cases an individual is authorized to be near or take a specified action with respect to an operating machine. For example, workers may need to feed parts to a machine, as shown in FIG. 10 at arrow 1002. In order to address this concern, the computer uses "movement recognition" to identify movements that are predetermined to be justified or safe.”, Column 12 lines 4-13, “On the other hand, if an unauthorized movement, such as that of a human walking at an unauthorized space near a machine as indicated at arrow 1004, was detected, then the computer would activate an alarm or modify operation of the machine. 
In the same way, the computer can learn a safe or acceptable movement; it can also learn or recognize an undesirable movement, as indicated by arrow 1006. An example of an undesirable movement may be walking while talking on a mobile phone, or perhaps carrying something, or any other movement predetermined to be undesirable.”, Column 14 lines 20-28, “System 1400 is a system for operating machinery in a manufacturing environment including machinery 1402. System 1400 includes sensor system 1404 configured to distinguish human skeletal positions from non-human object positions. Sensor system 1404 is further configured to determine whether one or more humans are present in a predetermined area.”. The cited figure and passages clearly show that the system is configured to track the motion of multiple humans. One of ordinary skill in the art would recognize that this would include an assistant.); and control the robot such that the robot is switched from the operation mode to the non-operation mode in a case where the assistant motion information satisfies a predetermined third condition (Whelan: Column 14 lines 20-28, “System 1400 is a system for operating machinery in a manufacturing environment including machinery 1402. System 1400 includes sensor system 1404 configured to distinguish human skeletal positions from non-human object positions. Sensor system 1404 is further configured to determine whether one or more humans are present in a predetermined area.”, Column 14 lines 29-39, “System 1400 includes computer 1406. Computer 1406 is configured to: responsive to determining that only the one or more humans are in the predetermined area, determine whether a false positive result has occurred. The false positive may be a first determination that the one or more humans are present when no human is actually present. 
Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”, Column 14 lines 40-52, “This illustrative embodiment may be modified or expanded. For example, computer 1406, in being configured to determine whether one or more humans are present in the predetermined area, computer 1406 is further configured to: use sensor system 1404 to track a plurality of points of articulation of the skeletal positions of the one or more humans to create tracked points of articulation. In this case, computer 1406 is programmed to compare the tracked points of articulation to known sets of points of articulation. Computer 1406 is further configured, responsive to the tracked points of articulation match at least one of the known sets of points of articulation, to determine that only the one or more humans are present in the predetermined area.”. The cited passages clearly show that the system is capable of tracking the movements of one or more humans and stopping the operation of the robot when one or more humans enter the predetermined area. One of ordinary skill in the art would recognize that this teaches a predetermined third condition for an “assistant”, e.g., when human A (i.e., the operator) enters the predetermined area is the first condition and when human B (i.e., the assistant) enters the predetermined area is the third condition). Regarding claim 11, Whelan in view of Hashimoto in further view of Usui teaches wherein a function other than the movable unit included in the robot is not stopped in the non-operation mode (Whelan: Column 10 lines 39-49, “Venn diagram 700 illustrates the interactions between humans 702, robots 704, and objects 706. 
When a human or object interacts with a robot, it is either ignored 708 or an alarm is triggered 710 (or some other action taken with respect to the robot or other machine). Moreover, the computer ignores static objects 712, objects at an acceptable distance 714, valid parts of an object 716, and justified movements 718. For invalid parts of an object 720 and unjustified movements 722, the computer may instruct the machine or robot to stop working or otherwise modify its operation.”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. One of ordinary skill in the art would recognize that the transition to the non-operable state only stops the movement of the robot and does not affect any other components of the system.). 
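For illustration only (a hypothetical model, not taken from the references), the reading of claim 11 above — that the non-operation mode makes only the movable unit inoperable while other functions continue running — can be sketched as follows. The class, attribute, and method names are all hypothetical.

```python
# Hypothetical sketch of claim 11's reading: switching to the
# non-operation mode disables only the movable unit; other functions
# of the robot (e.g. a camera serving as the detection unit) continue.

class Robot:
    def __init__(self) -> None:
        self.mode = "operation"      # movable unit starts operable
        self.camera_running = True   # detection unit keeps running

    def enter_non_operation_mode(self) -> None:
        # Only the movable unit is made inoperable; the camera
        # (and any other non-movable function) is left untouched.
        self.mode = "non-operation"

    def move_arm(self) -> bool:
        # The movable unit responds only while in the operation mode.
        return self.mode == "operation"
```

After `enter_non_operation_mode()`, `move_arm()` returns `False` while `camera_running` remains `True`, mirroring the claimed behavior.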
Regarding claim 12, Whelan in view of Hashimoto in further view of Usui teaches wherein the robot includes at least one of a movement mechanism or an operation mechanism as the movable unit (Whelan: Figure 1 robots 102 and 104, Figure 8 robot 800, Column 4 lines 40-49, “FIG. 1 illustrates a manufacturing environment, in accordance with an illustrative embodiment. Manufacturing environment 100 includes machine 102, machine 104, both of which are used to manufacture objects or assemblies. Machine 102 and machine 104 may be robots or other manufacturing equipment such as, but not limited to, hydraulic machines, arms, presses, tools, drills, saws, riveters, and many other automatic or semi-automatic devices. Manufacturing environment 100 may be particularly adapted to an aircraft manufacturing facility.”, Column 10 lines 64-67, “FIG. 8 illustrates an example of a false positive detection of a human in proximity to a machine, in accordance with an illustrative embodiment. Machine 800 may be, for example, machine 102 or 104 of FIG. 1 or machine 204 of FIG. 2.”. The cited passages and figures clearly show a robotic manipulator with articulable joints and an end effector that function as the movable unit. Hashimoto: Figure 1, Column 4 lines 2-4, “The robot main body 1 includes the slave arm 10, an end effector 16, a traveling unit 17, and a camera 51, and is installed in the working area.”, Column 4 lines 5-7, “The slave arm 10 is, for example, an arm of an articulated type industrial robot, but it is not limited to this. The slave arm 10 includes an arm main body 13 and a pedestal 15.”, Column 4 lines 8-20, “The arm main body 13 includes a plurality of links sequentially connected in a direction from a base-end part toward a tip-end part, and one or more joints coupling the adjacent links so that one of them is rotatable with respect to the other link. Further, the end effector 16 is coupled to the tip-end part of the arm main body 13. 
Moreover, the arm main body 13 is configured so that the tip-end part is moved with respect to the base-end part by rotating the joint, and the end effector 16 thus moves within a given operational area. The arm main body 13 includes a robotic arm drive part (not illustrated) which drives a plurality of joint axes. Further, the pedestal 15 supports the arm main body 13 and the end effector 16.”, Column 4 lines 35-42, “The traveling unit 17 is provided to the pedestal 15 and causes the entire robot main body 1 to travel. The traveling unit 17 has, for example, wheels and a wheel drive part (not illustrated) which rotatably drives the wheels. The wheel drive part rotatably drives the wheels to move the robot main body 1. Thus, in this embodiment, the robot main body 1 is a self-running robot which is self-runnable, but it is not limited to this.”). Regarding claim 13, Whelan in view of Hashimoto in further view of Usui teaches wherein the motion information indicates a movement of at least one of a predetermined body part, a line of sight, a voice, a brain wave, or a position of the operator (Whelan: Column 10 lines 39-49, “Venn diagram 700 illustrates the interactions between humans 702, robots 704, and objects 706. When a human or object interacts with a robot, it is either ignored 708 or an alarm is triggered 710 (or some other action taken with respect to the robot or other machine). Moreover, the computer ignores static objects 712, objects at an acceptable distance 714, valid parts of an object 716, and justified movements 718. For invalid parts of an object 720 and unjustified movements 722, the computer may instruct the machine or robot to stop working or otherwise modify its operation.”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. 
In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”. The cited passages clearly show that the system is configured to determine the motion of specific body parts of an operator and track an operator's position.). Regarding claim 14, Whelan teaches an information processing method executed by a computer, the method comprising (Whelan: Abstract, “The advantageous embodiments include a system for operating machinery in a manufacturing environment. The system includes a sensor system and a computer. The sensor system is configured to distinguish human skeletal positions from non-human object positions and to determine whether one or more humans are present in a predetermined area. The computer is configured to: responsive to determining that only the one or more humans are in the predetermined area, determine whether a false positive result has occurred, wherein the false positive comprises a first determination that the one or more humans are present when no human is actually present. The computer is also configured to: responsive to determining that the false positive result has not occurred, taking an action selected from the group consisting of issuing an alert, stopping the machinery, or a combination thereof.”, Column 5 lines 1-14, “As described above, machine 102 and machine 104 are mechanical devices used to complete industrial tasks. 
In a specific example, typically used in large scale manufacturing, hydraulic robots are automatically controlled, reprogrammable, multipurpose machines capable of moving in three or more axes. Typically, hydraulic robots have a large hydraulically-driven arm anchored to a base-like structure as shown at machine 102 and machine 104 in FIG. 1. Most hydraulic robots handle large repetitive tasks, such as welding, painting, and/or molding. These machines can operate at high speeds and can carry heavy loads, making them ideal for manufacturing work. Such robots help manufacturers become more competitive and efficient, while reducing work related injuries caused by repetition.”, Column 17 lines 53-67, “Processor unit 1704 serves to execute instructions for software that may be loaded into memory 1706. This software may be any of the associative memories described elsewhere herein, or software for implementing the processes described elsewhere herein. Thus, for example, software loaded into memory 1706 may be software for executing method 500 of FIG. 5 or method 1500 of FIG. 15.”. The cited passages clearly teach an information processing method executed by a computer.): acquiring motion information indicating a motion of an operator who operates (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). 
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the object's movements detected by the sensor are justified (operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such as to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”. The cited passages clearly show that the system is configured to acquire motion information of an operator using a motion detector.), including a detection unit of at least one of a sensor, a camera, or a microphone (Whelan: Column 11 lines 56-63, “FIG. 
10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”, Column 17 lines 22-30, “Robotic motion control system 1600 also includes human sensor 1608 in communication with motion controller 1604. Human sensor 1608 is calibrated to scan work area 1606 using structured light sensors 1610 to identify human 1612 and motion thereof within work area 1606. An example of a structured light sensor is a camera with software for interpreting images.”); controlling the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition (Whelan: Column 10 lines 39-49, “Venn diagram 700 illustrates the interactions between humans 702, robots 704, and objects 706. When a human or object interacts with a robot, it is either ignored 708 or an alarm is triggered 710 (or some other action taken with respect to the robot or other machine). Moreover, the computer ignores static objects 712, objects at an acceptable distance 714, valid parts of an object 716, and justified movements 718. For invalid parts of an object 720 and unjustified movements 722, the computer may instruct the machine or robot to stop working or otherwise modify its operation.”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. 
In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. The cited passages clearly show that the system is configured to transition to a non-operable mode when the motion information satisfies a first condition.); acquiring detection information acquired by the detection unit (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). 
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the object's movements detected by the sensor are justified (operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such as to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 12 lines 19-31, “The computer then compares the data received from human motion sensor 1000 to the known justified or undesirable movements, and then classifies each given input from the human motion sensor 1000 as justified or to be ignored.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. 
The cited passages clearly show that the processor is configured to acquire the detection result from the detection unit.); and outputting the detection information to at least one output device (Whelan: Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”, Column 18 lines 28-35, “Input/output (I/O) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/O) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/O) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”); and stopping the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition (Whelan: Column 4 lines 50-55, “In an illustrative embodiment, sensors, such as sensor 106, sensor 108, and sensor 110 are used to detect humans and human movement in proximity to machine 102 or machine 104. Also shown in FIG. 1 are scan areas, such as scan area 112 and scan area 114, between which is the line of sight of a given sensor.”, Column 9 lines 35-40, “As indicated above, the advantageous embodiments use information collected from a human motion sensor, such as human motion sensor 600. 
As soon as human motion sensor 600 recognizes an object as human-like, as indicated by arrow 602, it sends a signal to a computer or software that a human is present, as indicated at skeleton 604.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. The cited passages show that, when a human enters the scanning area of the human motion detector, the system is configured to display the tracking information on a display unit. One of ordinary skill in the art would recognize that this is a second predetermined condition that is different from the first condition, as the second condition is when a human enters a scanning area of the human motion sensor and the first condition is when the human enters an area within a predetermined range from the robot.). Whelan does not teach acquiring motion information indicating a motion of an operator who operates a movable unit of a robot; the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquiring detection information acquired by the detection unit of the robot. Hashimoto, in the same field of endeavor, teaches acquiring motion information indicating a motion of an operator who operates a movable unit of a robot (Hashimoto: Column 3 lines 50-67, “The remote control robot system 100 according to this embodiment is a system including a master-slave type robot in which a slave arm operates following a motion of a master arm. 
The remote control robot system 100 is configured so that an operator located at a position distant from a working area (outside the working area) of a slave arm 10 (a robotic arm, will be described later in detail) of the robot main body 1 can input an operational instruction to the remote control robot system 100 by moving a master arm 70 of the remote control device 2 (a robotic arm operational instruction input part, will be described later in detail), to make the slave arm 10 perform an operation corresponding to the operational instruction by a control of the control device 3 to perform a work, such as an assembling work of components.”, Column 5 lines 13-32, “In this embodiment, the given operating condition parameter change instructing action is the operator making the hand gesture, and the contactless action detecting part 71 is a detector which detects the operator's hand gesture within a range set above the contactless action detecting part 71. As illustrated in FIG. 2, the contactless action detecting part 71 includes an infrared radiator 71a for radiating an infrared ray upwardly and a stereo camera 71b for receiving the infrared ray radiated from the infrared radiator 71a and reflected on a target object. It is further configured to calculate an attitude of each finger (a shape of a hand) and a motion of the hand based on an image captured by the stereo camera 71b. Further, the contactless action detecting part 71 is installed near the master arm 70 and configured to be capable of manipulating the robot main body 1 while parallelly performing an input of the operational instruction to the master arm 70 and an input of the operating condition parameter change instructing action to the contactless action detecting part 71. For example, LEAP(®) of Leap Motion Inc. may be used as the contactless action detecting part 71.”. The cited passages clearly teach acquiring motion information of an operator who controls a robot.). 
Whelan teaches an information processing method executed by a computer, the method comprising: acquiring motion information indicating a motion of an operator, including a detection unit of at least one of a sensor, a camera, or a microphone; and controlling the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition; acquiring detection information acquired by the detection unit; outputting the detection information to at least one output device; and stopping the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition. Whelan does not teach acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. Hashimoto teaches acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the method taught in Whelan with acquiring motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto. Furthermore, the system of Whelan is already configured to monitor the motion of humans in the vicinity of the robot(s) and control the robots based on said motion. Additionally, Whelan teaches input devices through which an operator can interact with the system (Whelan: Column 14 lines 5-10, “Finally, the advantageous embodiments contemplate using laptop 1312 to install, configure and/or optimize the system. Laptop 1312 could connect to the machine via a physical plug or through a secured network connection. 
Thus, the advantageous embodiments are not necessarily limited to a dedicated computer or computer 1310.”, Column 18 lines 28-35, “Input/output (I/O) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/O) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/O) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”). The method taught in Whelan is therefore readily configurable with the method of detecting motion of an operator of the system taught in Hashimoto. All that would be required to modify the system to acquire motion information of an operator who operates the robot would be to implement the methods taught in Hashimoto. Such a modification would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an information processing method executed by a computer, the method comprising: acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the information processing method taught in Whelan with acquiring motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquiring detection information acquired by the detection unit of the robot. 
Usui, in the same field of endeavor, teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone (Usui: ¶ 0017, “A medical arm device 510 according to this embodiment is provided beside the treatment table 530. The medical arm device 510 includes a base 511 serving as a base, an arm 512 extending from the base 511, and an imaging unit 515 connected to the tip of the arm 512 as a tip unit.”, ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1 , the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0025, “Referring to FIG. 2, a medical arm device 400 according to this embodiment includes a base portion 410 and an arm portion 420.”, ¶ 0026, “The arm section 420 has a plurality of joints 421 a to 421 f, a plurality of links 422 a to 422 c connected to each other by the joints 421 a to 421 f, and an imaging unit 423 provided at the tip of the arm section 420.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”); and acquiring detection information acquired by the detection unit of the robot (Usui: ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. 
The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1, the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0020, “Furthermore, a display device 550 such as a monitor or display is installed at a position facing the user 520. The image of the treatment site captured by the imaging unit 515 is displayed as an electronic image on the display screen of the display device 550. The user 520 performs various treatments while viewing the electronic image of the treatment area displayed on the display screen of the display device 550.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”. The cited passages clearly show that the processor acquires the detection result from the detection unit of the robot.). Whelan in view of Hashimoto teaches a method wherein a detection unit of at least one of a sensor, a camera, or a microphone is included, and the processor is configured to: acquire detection information acquired by the detection unit; and output the detection information to at least one output device. Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. 
Usui teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the method taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot taught in Usui. Furthermore, the method taught in Whelan in view of Hashimoto is already configured to gather information using a detection unit and display this information on a display. Modifying the robot to have sensors mounted on the robot itself as taught in Usui would have been well within the technological capabilities of a person of ordinary skill in the art. The modification would have required simply adding additional sensors to the robot according to known methods. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an information processing method configured to: the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot taught in Usui with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. 
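For reference, the two-condition control logic recited in the claims as mapped above (switching to a non-operation mode when the motion information satisfies a first condition, and stopping output of detection information when the operator is a predetermined distance from the robot as a second condition) can be sketched as follows. This is an illustrative sketch only; every name, structure, and threshold is hypothetical, and it does not reproduce any implementation from the claims or the cited references.

```python
# Hypothetical sketch of the claimed two-condition control logic.
# The claims recite only the functional conditions; all names and
# thresholds below are illustrative assumptions.

OPERATION_MODE = "operation"          # movable unit is operable
NON_OPERATION_MODE = "non-operation"  # movable unit is inoperable

def control_step(motion_info, detection_info, state,
                 first_condition, second_condition_distance):
    """One control cycle over acquired motion and detection information.

    motion_info: dict with the operator's motion data, including
        "operator_distance" (distance between operator and robot).
    detection_info: data acquired by the robot's detection unit
        (e.g., a camera frame) to be output to an output device.
    state: mutable dict holding "mode" and "output_enabled".
    first_condition: predicate on motion_info; when true, the robot
        is switched to the non-operation mode.
    second_condition_distance: predetermined distance for the second
        condition (stop outputting detection information).
    """
    # First condition: switch the robot to the non-operation mode,
    # making the movable unit inoperable.
    if first_condition(motion_info):
        state["mode"] = NON_OPERATION_MODE

    # Second condition: the operator is the predetermined distance
    # away from the robot, so stop output of the detection information.
    if motion_info["operator_distance"] >= second_condition_distance:
        state["output_enabled"] = False

    # Output the detection information only while output is enabled.
    return detection_info if state["output_enabled"] else None
```

As a usage sketch, a cycle with the operator near the robot and no triggering motion passes the detection information through, while a cycle satisfying both conditions switches the mode and suppresses the output.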
Regarding claim 15, Whelan teaches a non-transitory computer-readable storage medium storing an information processing program causing a computer to execute a process comprising (Whelan: Abstract, “The advantageous embodiments include a system for operating machinery in a manufacturing environment. The system includes a sensor system and a computer. The sensor system is configured to distinguish human skeletal positions from non-human object positions and to determine whether one or more humans are present in a predetermined area. The computer is configured to: responsive to determining that only the one or more humans are in the predetermined area, determine whether a false positive result has occurred, wherein the false positive comprises a first determination that the one or more humans are present when no human is actually present. The computer is also configured to: responsive to determining that the false positive result has not occurred, the taking an action selected from the group consisting of issuing an alert, stopping the machinery, or a combination thereof.”, Column 5 lines 1-14, “As described above, machine 102 and machine 104 are mechanical devices used to complete industrial tasks. In a specific example, typically used in large scale manufacturing, hydraulic robots are automatically controlled, reprogrammable, multipurpose machines capable of moving in three or more axes. Typically, hydraulic robots have a large hydraulically-driven arm anchored to a base-like structure as shown at machine 102 and machine 104 in FIG. 1. Most hydraulic robots handle large repetitive tasks, such as welding, painting, and or molding. These machines can operate at high speeds and can carry heavy loads, making them ideal for manufacturing work. 
Such robots help manufacturers become more competitive and efficient, while reducing work related injuries caused by repetition.”, Column 17 lines 53-67, “Processor unit 1704 serves to execute instructions for software that may be loaded into memory 1706. This software may be any of the associative memories described elsewhere herein, or software for implementing the processes described elsewhere herein. Thus, for example, software loaded into memory 1706 may be software for executing method 500 of FIG. 5 or method 1500 of FIG. 15.”. The cited passages clearly teach an information processing system comprising a processor.): acquiring motion information indicating a motion of an operator who operates (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). 
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the objects movements detected by the sensor are justified ( operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”. The cited passages clearly show that the system is configured to acquire motion information of an operator using a motion detector.), including a detection unit of at least one of a sensor, a camera, or a microphone (Whelan: Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. 
Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”, Column 17 lines 22-30, “Robotic motion control system 1600 also includes human sensor 1608 in communication with motion controller 1604. Human sensor 1608 is calibrated to scan work area 1606 using structured light sensors 1610 to identify human 1612 and motion thereof within work area 1606. An example of a structured light sensor is a camera with software for interpreting images.”); controlling the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition (Whelan: Column 10 lines 39-49, “Venn diagram 700 illustrates the interactions between humans 702, robots 704, and objects 706. When a human or object interacts with a robot, it is either ignored 708 or an alarm is triggered 710 (or some other action taken with respect to the robot or other machine). Moreover, the computer ignores static objects 712, objects at an acceptable distance 714, valid parts of an object 716, and justified movements 718. For invalid parts of an object 720 and unjustified movements 722, the computer may instruct the machine or robot to stop working or otherwise modify its operation.”, Column 13 lines 1-11, “… the computer will track these points of articulation such that when they are closer than a predefined distance to the machine an alert will trigger or operation of the machine will change. 
In this manner, only the hands or wrists of an individual will activate an alarm or stop an operating machine.”, Column 13 lines 21-32, “This principle may be combined with a severity scale, as described with respect to FIG. 9. Thus, for example, the computer may be configured to track only certain body parts or only certain motions, and measure the severity (possibly the degree of proximity to the machine) of each body part. Thus, for example, if the hand is at a first distance from a press an alarm triggers, but if the hand is at a second, closer distance from the press then operation of the machine may be automatically stopped.”, Column 14 lines 29-39, “Computer 1406 is also configured to: responsive to determining that the false positive result has not occurred, take an action selected from the group consisting of issuing an alert, stopping machinery 1402, or a combination thereof.”. The cited passages clearly show that the system is configured to transition to a non-operable mode when the motion information satisfies a first condition.); acquiring detection information acquired by the detection unit (Whelan: Column 7 lines 15-18, “Returning to FIG. 5, method 500 begins with collecting information (operation 502). Based on the information the computer determines if humans are present (operation 504). If not, the process returns back to operation 502.”, Column 7 lines 40-48, “However, if the system is notified that the detected object is not static at operation 510, then the system makes another determination whether the detected object is at an acceptable distance from the machine in question (operation 514). If not, then the system is again notified that the object is not at an acceptable distance (operation 516). 
If so, that is the object is at an acceptable distance, then the object is ignored or moved (operation 512) and subsequently the method returns to collecting information at operation 502.”, Column 7 lines 49-59, “Once the system is notified that the detected object, which is not static, is not at an acceptable distance from the machine, the system determines whether the objects movements detected by the sensor are justified ( operation 518).”, Column 8 lines 16-27, “Returning to method 500, after notifying the system that the motion is justified at operation 522, the computer or system determines whether to ignore body parts (operation 524). The system ignores certain parts of the body if the system has been instructed to do so, such to ignore motion of the head but to pay attention to motion of the hands. If the system determines that a body part should be ignored, then method 500 returns to operation 512 to ignore or move the object and from there returns to operation 502 to continue collecting information.”, Column 12 lines 19-31, “The computer then compares the data received from human motion sensor 1000 to the known justified or undesirable movements, and then classifies each given input from the human motion sensor 1000 as justified or to be ignored.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. 
The cited passages clearly show that the processor is configured to acquire the detection result from the detection unit.); and outputting the detection information to at least one output device (Whelan: Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”, Column 18 lines 28-35, “Input/output (I/0) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/0) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/0) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”); and stopping the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition (Whelan: Column 4 lines 50-55, “In an illustrative embodiment, sensors, such as sensor 106, sensor 108, and sensor 110 are used to detect humans and human movement in proximity to machine 102 or machine 104. Also shown in FIG. 1 are scan areas, such as scan area 112 and scan area 114, between which is the line of sight of a given sensor.”, Column 9 lines 35-40, “As indicated above, the advantageous embodiments use information collected from a human motion sensor, such as human motion sensor 600. 
As soon as human motion sensor 600 recognizes an object as human-like, as indicated by arrow 602, it sends a signal to a computer or software that a human is present, as indicated at skeleton 604.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. The cited passages show that, when a human enters the scanning area of the human motion detector, the system is configured to display the tracking information to a display unit. One of ordinary skill in the art would recognize that this is a second predetermined condition that is different from the first condition, as the second condition is when a human enters a scanning area of the human motion sensor and the first condition is when the human enters an area that is a predetermined range away from the robot.). Whelan does not teach acquiring motion information indicating a motion of an operator who operates a movable unit of a robot; the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquiring detection information acquired by the detection unit of the robot. Hashimoto, in the same field of endeavor, teaches acquiring motion information indicating a motion of an operator who operates a movable unit of a robot (Hashimoto: Column 3 lines 50-67, “The remote control robot system 100 according to this embodiment is a system including a master-slave type robot in which a slave arm operates following a motion of a master arm. 
The remote control robot system 100 is configured so that an operator located at a position distant from a working area (outside the working area) of a slave arm 10 (a robotic arm, will be described later in detail) of the robot main body 1 can input an operational instruction to the remote control robot system 100 by moving a master arm 70 of the remote control device 2 (a robotic arm operational instruction input part, will be described later in detail), to make the slave arm 10 perform an operation corresponding to the operational instruction by a control of the control device 3 to perform a work, such as an assembling work of components.”, Column 5 lines 13-32, “In this embodiment, the given operating condition parameter change instructing action is the operator making the hand gesture, and the contactless action detecting part 71 is a detector which detects the operator's hand gesture within a range set above the contactless action detecting part 71. As illustrated in FIG. 2, the contactless action detecting part 71 includes an infrared radiator 71a for radiating an infrared ray upwardly and a stereo camera 71b for receiving the infrared ray radiated from the infrared radiator 71a and reflected on a target object. It is further configured to calculate an attitude of each finger (a shape of a hand) and a motion of the hand based on an image captured by the stereo camera 71b. Further, the contactless action detecting part 71 is installed near the master arm 70 and configured to be capable of manipulating the robot main body 1 while parallelly performing an input of the operational instruction to the master arm 70 and an input of the operating condition parameter change instructing action to the contactless action detecting part 71. For example, LEAP(®) of Leap Motion Inc. may be used as the contactless action detecting part 71.”. The cited passages clearly teach acquiring motion information of an operator who controls a robot.). 
Whelan teaches a non-transitory computer-readable storage medium storing an information processing program causing a computer to execute a process comprising: acquiring motion information indicating a motion of an operator, including a detection unit of at least one of a sensor, a camera, or a microphone; and controlling the robot such that the robot is switched from an operation mode in which the movable unit is in an operable state to a non-operation mode in which the movable unit is in an inoperable state in a case where the motion information satisfies a predetermined first condition; acquiring detection information acquired by the detection unit; outputting the detection information to at least one output device; and stopping the output of the detection information to the output device in a case where the motion information indicates that an operator is a predetermined distance away from the robot as a predetermined second condition. Whelan does not teach acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. Hashimoto teaches acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the non-transitory computer-readable storage medium taught in Whelan with acquiring motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto. Furthermore, Whelan is already configured to monitor the motion of humans in the vicinity of the robot(s) and control the robots based on said motion. Additionally, Whelan teaches input devices through which an operator can interact with the system (Whelan: Column 14 lines 5-10, “Finally, the advantageous embodiments contemplate using laptop 1312 to install, configure and or optimize the system. 
Laptop 1312 could connect to the machine via a physical plug or through a secured network connection. Thus, the advantageous embodiments are not necessarily limited to a dedicated computer or computer 1310.”, Column 18 lines 28-35, “Input/output (I/O) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/O) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/O) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”). The non-transitory computer-readable storage medium taught in Whelan is therefore readily configurable with the method of detecting motion of an operator of the system taught in Hashimoto. All that would be required to modify the system to acquire motion information of an operator who operates the robot would be to implement the methods taught in Hashimoto. Such a modification would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a non-transitory computer-readable storage medium storing an information processing program causing a computer to execute a process comprising: acquiring motion information indicating a motion of an operator who operates a movable unit of a robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the non-transitory computer-readable storage medium taught in Whelan with acquiring motion information indicating a motion of an operator who operates a movable unit of a robot taught in Hashimoto with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. 
Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; and acquiring detection information acquired by the detection unit of the robot. Usui, in the same field of endeavor, teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone (Usui: ¶ 0017, “A medical arm device 510 according to this embodiment is provided beside the treatment table 530. The medical arm device 510 includes a base 511 serving as a base, an arm 512 extending from the base 511, and an imaging unit 515 connected to the tip of the arm 512 as a tip unit.”, ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1 , the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0025, “Referring to FIG. 2, a medical arm device 400 according to this embodiment includes a base portion 410 and an arm portion 420.”, ¶ 0026, “The arm section 420 has a plurality of joints 421 a to 421 f, a plurality of links 422 a to 422 c connected to each other by the joints 421 a to 421 f, and an imaging unit 423 provided at the tip of the arm section 420.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. 
In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”); and acquiring detection information acquired by the detection unit of the robot (Usui: ¶ 0019, “An imaging unit 515 is connected to the tip of the arm portion 512 as a tip unit. The imaging unit 515 is a unit that acquires an image of an imaging target, and is, for example, a camera that can capture moving images and still images. As shown in FIG. 1 , the posture and position of the arm unit 512 and the imaging unit 515 provided at the tip of the arm unit 512 are controlled by the medical arm device 510 so that the imaging unit 515 captures an image of the treatment site of the treatment target 540.”, ¶ 0020, “Furthermore, a display device 550 such as a monitor or display is installed at a position facing the user 520. The image of the treatment site captured by the imaging unit 515 is displayed as an electronic image on the display screen of the display device 550 . The user 520 performs various treatments while viewing the electronic image of the treatment area displayed on the display screen of the display device 550.”, ¶ 0028, “The imaging unit 423 is a unit that acquires an image of a subject, and is, for example, a camera that captures moving images and still images. By controlling the driving of the arm portion 420, the position and orientation of the imaging unit 423 are controlled. In this embodiment, the imaging unit 423 captures an image of a region of the patient's body, for example, the treatment site.”. The cited passages clearly show that the processor acquires the detection result from the detection unit of the robot.). 
Whelan in view of Hashimoto teaches a non-transitory computer-readable storage medium wherein a detection unit of at least one of a sensor, a camera, or a microphone is included, and the processor is configured to: acquire detection information acquired by the detection unit; and output the detection information to at least one output device. Whelan in view of Hashimoto does not teach the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. Usui teaches the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. A person of ordinary skill in the art would have had the technological capabilities required to have modified the non-transitory computer-readable storage medium taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot taught in Usui. Furthermore, the non-transitory computer-readable storage medium taught in Whelan in view of Hashimoto is already configured to gather information using a detection unit and display this information on a display. Modifying the robot to have sensors mounted on the robot itself as taught in Usui would have been well within the technological capabilities of a person of ordinary skill in the art. The modification would have required simply adding additional sensors to the robot according to known methods. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. 
The combination would have yielded the predictable result of a non-transitory computer-readable storage medium storing an information processing program configured to: the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the non-transitory computer-readable storage medium taught in Whelan in view of Hashimoto with the robot including a detection unit of at least one of a sensor, a camera, or a microphone; acquiring detection information acquired by the detection unit of the robot taught in Usui with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Claim(s) 2-3 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 10186130 B2 ("Whelan") in view of US 2023/0390929 A1 ("Hashimoto") in further view of JP 2020116385 A ("Usui") in further view of US 9623560 B1 ("Theobald"). Regarding claim 2, Whelan in view of Hashimoto in further view of Usui teaches wherein the processor is configured to: acquire operation information for the movable unit that is input by the operator (Hashimoto: Column 4 lines 58-62, “The master arm 70 is a device which receives an input of an operational instruction for the slave arm 10 from the operator. 
In this embodiment, the master arm 70 is a device by which a target attitude of the slave arm 10 can be inputted and an operation mode for the slave arm 10 can be inputted.”, Column 5 lines 13-32, “In this embodiment, the given operating condition parameter change instructing action is the operator making the hand gesture, and the contactless action detecting part 71 is a detector which detects the operator's hand gesture within a range set above the contactless action detecting part 71. As illustrated in FIG. 2, the contactless action detecting part 71 includes an infrared radiator 71a for radiating an infrared ray upwardly and a stereo camera 71b for receiving the infrared ray radiated from the infrared radiator 71a and reflected on a target object. It is further configured to calculate an attitude of each finger (a shape of a hand) and a motion of the hand based on an image captured by the stereo camera 71b. Further, the contactless action detecting part 71 is installed near the master arm 70 and configured to be capable of manipulating the robot main body 1 while parallelly performing an input of the operational instruction to the master arm 70 and an input of the operating condition parameter change instructing action to the contactless action detecting part 71. For example, LEAP(®) of Leap Motion Inc. may be used as the contactless action detecting part 71.”, Column 8 lines 1-5, “The signals outputted from the master arm 70, the contactless action detecting part 71, and the mode selecting part 75 of the remote control device 2 are inputted to the control device 3. Further, the signal outputted from the camera 51 is inputted to the control device 3.”, Column 8 lines 19-35, “The operator controls the master arm 70 to operate the slave arm 10 to move the workpiece W, which is held by the end effector 16, toward an installation position P. 
Here, when the operator makes the first hand gesture which includes the hand shape by which a parameter value rises, the contactless action detecting part 71 detects the first hand gesture, and transmits the attitudes of the respective fingers and the hand motion related to the first hand gesture to the control device 3. Then, the change instruction content identifying module 34 determines that the operating condition parameter change instruction related to the change mode of "arm operation speed increase" has been inputted to the remote control robot system 100 based on the first hand gesture.”. The cited passages clearly teach acquiring operation information that is inputted by an operator.); control the robot such that the robot makes a motion corresponding to the operation information in a case of the operation mode (Hashimoto: Column 6 lines 9-21, “The manual mode is a working mode in which the control device 3 operates the robot main body 1 according to the operational instruction inputted to the remote control robot system 100 via the master arm 70. That is, in the manual mode, the motion controlling module 33 controls the operation of the slave arm 10 based on the operational instruction inputted to the master arm 70. 
This manual mode includes a mode in which, when the control device 3 operates the robot main body 1 based on the operational instruction inputted by the operator controlling the master arm 70, the control device 3 applies a correction in a part of the operational instruction inputted by the operator to operate the robot main body 1.”, Column 6 lines 49-55, “The change instruction content identifying module 34, based on change instruction content data stored in the memory part 32, determines one hand gesture of the operator detected by the contactless action detecting part 71, that is, an operating condition parameter change mode of the robot main body 1 corresponding to the operating condition parameter change instructing action.”, Column 8 lines 1-5, “The signals outputted from the master arm 70, the contactless action detecting part 71, and the mode selecting part 75 of the remote control device 2 are inputted to the control device 3. Further, the signal outputted from the camera 51 is inputted to the control device 3.”. The cited passages clearly teach that the robot is controlled based on the operation information inputted by the operator.). Whelan in view of Hashimoto in further view of Usui does not teach discarding the operation information in a case of the non-operation mode. Theobald, in the same field of endeavor, teaches discarding the operation information in a case of the non-operation mode (Theobald: Column 3 lines 51-59, “If the robot 10 is controlled by a remote operator, the robot 10 may include a control device to essentially "override" the remote operator's control commands. For example, an operator may be controlling the robot 10 remotely and may be unaware of people in the vicinity of the robot 10. If the robot 10 is at risk of contacting a human, the robot 10 may come to a stop or otherwise adjust its planned path to avoid contacting the human, notwithstanding the remote operator's control commands.”). 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the information processing system taught in Whelan in view of Hashimoto in further view of Usui with discarding the operation information in a case of the non-operation mode taught in Theobald with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it would have been obvious to try. The system taught in Whelan in view of Hashimoto in further view of Usui is already configured to transition between an operable state and a non-operable state based on the motion information of humans and is also configured to receive operator inputs. A person of ordinary skill in the art would have recognized that the system would need to override or otherwise ignore the operator inputs when the system would set the robot to a non-operable state in order to properly prevent the robot from coming into contact with a human. Furthermore, implementing the method of discarding the operation information in a case of the non-operation mode taught in Theobald would have been well within the technological capabilities of one of ordinary skill in the art. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. 
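The claim 2 behavior at issue above, accepting operator inputs in the operation mode and discarding them in the non-operation mode, can be sketched as a simple mode gate. This is an illustrative sketch only; names such as `Mode` and `handle_command` are hypothetical and are not drawn from any cited reference:

```python
from enum import Enum, auto

class Mode(Enum):
    OPERATION = auto()      # operator commands drive the movable unit
    NON_OPERATION = auto()  # e.g., a human detected near the robot

def handle_command(mode, command, discarded):
    """Forward the operator's command in the operation mode; discard it otherwise."""
    if mode is Mode.NON_OPERATION:
        discarded.append(command)  # Theobald-style override: the input is ignored
        return None
    return command  # passed through to motion control

# Usage: the same input is honored or dropped depending on the current mode.
discarded = []
cmd = {"axis": 1, "speed": 0.5}
print(handle_command(Mode.OPERATION, cmd, discarded))      # honored
print(handle_command(Mode.NON_OPERATION, cmd, discarded))  # discarded (None)
```

The sketch mirrors the examiner's rationale that no new functionality is needed: the mode check alone decides whether the unchanged command path runs.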
Regarding claim 3, Whelan in view of Hashimoto in further view of Usui in further view of Theobald teaches wherein the processor is configured to generate the operation information causing the movable unit to be operated in operative association with the motion information in a case of the operation mode (Hashimoto: Column 5 lines 13-32, “In this embodiment, the given operating condition parameter change instructing action is the operator making the hand gesture, and the contactless action detecting part 71 is a detector which detects the operator's hand gesture within a range set above the contactless action detecting part 71. As illustrated in FIG. 2, the contactless action detecting part 71 includes an infrared radiator 71a for radiating an infrared ray upwardly and a stereo camera 71b for receiving the infrared ray radiated from the infrared radiator 71a and reflected on a target object. It is further configured to calculate an attitude of each finger (a shape of a hand) and a motion of the hand based on an image captured by the stereo camera 71b. Further, the contactless action detecting part 71 is installed near the master arm 70 and configured to be capable of manipulating the robot main body 1 while parallelly performing an input of the operational instruction to the master arm 70 and an input of the operating condition parameter change instructing action to the contactless action detecting part 71. For example, LEAP(®) of Leap Motion Inc. may be used as the contactless action detecting part 71.”, Column 8 lines 1-5, “The signals outputted from the master arm 70, the contactless action detecting part 71, and the mode selecting part 75 of the remote control device 2 are inputted to the control device 3. 
Further, the signal outputted from the camera 51 is inputted to the control device 3.”, Column 8 lines 19-35, “The operator controls the master arm 70 to operate the slave arm 10 to move the workpiece W, which is held by the end effector 16, toward an installation position P. Here, when the operator makes the first hand gesture which includes the hand shape by which a parameter value rises, the contactless action detecting part 71 detects the first hand gesture, and transmits the attitudes of the respective fingers and the hand motion related to the first hand gesture to the control device 3. Then, the change instruction content identifying module 34 determines that the operating condition parameter change instruction related to the change mode of "arm operation speed increase" has been inputted to the remote control robot system 100 based on the first hand gesture.”. The cited passages clearly teach controlling the robot based on the detected gestures and movements of an operator. One of ordinary skill in the art would recognize that this teaches that the operation information used to control the robot is generated in association with the determined motion information of the operator.). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 10186130 B2 ("Whelan") in view of US 2023/0390929 A1 ("Hashimoto") in further view of JP 2020116385 A ("Usui") in further view of US 11565418 B2 ("Sejimo"). 
Regarding claim 10, Whelan in view of Hashimoto in further view of Usui teaches wherein the processor is configured to: output the detection information to a first output device associated with the operator in advance (Whelan: Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”, Column 18 lines 28-35, “Input/output (I/0) unit 1712 allows for input and output of data with other devices that may be connected to data processing system 1700. For example, input/output (I/0) unit 1712 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output (I/0) unit 1712 may send output to a printer. Display 1714 provides a mechanism to display information to a user.”); acquire assistant motion information indicating a motion of the assistant (Whelan: Figure 10, Column 11 lines 56-63, “FIG. 10 illustrates examples of justified movements, unjustified movements, and undesirable movements, in accordance with an illustrative embodiment. Arrow 1002 are justified movements detected by human motion sensor 1000 which are pre-defined as being acceptable such that an alert is not issued and modification of operation of the machine is not performed. Human motion sensor 1000 may be human motion sensor 300 of FIG. 3, for example.”, Column 11 line 64 – Column 12 line 3, “As indicated above, in some cases an individual is authorized to be near or take a specified action with respect to an operating machine. For example, workers may need to feed parts to a machine, as shown in FIG. 10 at arrow 1002. 
In order to address this concern, the computer uses "movement recognition" to identify movements that are predetermined to be justified or safe.”, Column 12 lines 4-13, “On the other hand, if an unauthorized movement, such as that of a human walking at an unauthorized space near a machine as indicated at arrow 1004, was detected, then the computer would activate an alarm or modify operation of the machine. In the same way, the computer can learn a safe or acceptable movement; it can also learn or recognize an undesirable movement, as indicated by arrow 1006. An example of an undesirable movement may be walking while talking on a mobile phone, or perhaps carrying something, or any other movement predetermined to be undesirable.”, Column 14 lines 20-28, “System 1400 is a system for operating machinery in a manufacturing environment including machinery 1402. System 1400 includes sensor system 1404 configured to distinguish human skeletal positions from non-human object positions. Sensor system 1404 is further configured to determine whether one or more humans are present in a predetermined area.”. The cited figure and passages clearly show that the system is configured to track the motion of multiple humans. One of ordinary skill in the art would recognize that this would include an assistant.); and stop the output of the detection information to the second output device in a case where the assistant motion information satisfies a predetermined fourth condition (Whelan: Column 4 lines 50-55, “In an illustrative embodiment, sensors, such as sensor 106, sensor 108, and sensor 110 are used to detect humans and human movement in proximity to machine 102 or machine 104. Also shown in FIG. 1 are scan areas, such as scan area 112 and scan area 114, between which is the line of sight of a given sensor.”, Column 9 lines 35-40, “As indicated above, the advantageous embodiments use information collected from a human motion sensor, such as human motion sensor 600. 
As soon as human motion sensor 600 recognizes an object as human-like, as indicated by arrow 602, it sends a signal to a computer or software that a human is present, as indicated at skeleton 604.”, Column 13 lines 63-67, “Optionally, computer 1310 could display on a display device a skeleton with tracked points of articulation, thereby allowing people to observe what the computer is tracking based on input from human motion sensor 1302 or human motion sensor 1304.”. The cited passages show that, when a human enters the scanning area of the human motion detector, the system is configured to display the tracking information to a display unit. One of ordinary skill in the art would recognize that this is a second predetermined condition that is different from the first condition, as the second condition is when a human enters a scanning area of the human motion sensor and the first condition is when the human enters an area that is a predetermined range away from the robot. Additionally, because the system is capable of tracking the motion of multiple humans, one of ordinary skill in the art would recognize that this teaches a predetermined fourth condition for an “assistant”, e.g., when human A (i.e. the operator) enters the scanning area of the human motion sensor is the second condition and when human B (i.e. the assistant) enters the scanning area of the human motion sensor is the fourth condition.). Whelan in view of Hashimoto in further view of Usui does not teach and a second output device associated with an assistant other than the operator in advance. Sejimo, in the same field of endeavor, teaches and a second output device associated with an assistant other than the operator in advance (Sejimo: Column lines, “The display device 401 shown in FIG. 4 includes a monitor and has a function of displaying various screens and the like. 
Therefore, an operator can confirm a driving state and the like of the robot 1 via the display device 401.”, Column lines, “A display input device including both of the functions of the display device 401 and the input device 402 may be used instead of the display device 401 and the input device 402. As the display input device, for example, a touch panel display can be used. The robot system 100 may include one display device 401 and one input device 402 or may include a plurality of display devices 401 and a plurality of input devices 402.”. The cited passage clearly teaches a plurality of display devices configured to display information to multiple users.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the system taught in Whelan in view of Hashimoto in further view of Usui with and a second output device associated with an assistant other than the operator in advance taught in Sejimo with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it would have required the simple addition of a known peripheral device to the system. The system taught in Whelan in view of Hashimoto in further view of Usui already teaches the use of a display device to display information to humans. A person of ordinary skill in the art would have had the technological capabilities required to have added a second display device to the system. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. 
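The claim 10 output routing discussed above (detection information always goes to a first device associated with the operator, and output to a second device associated with the assistant stops once the assistant's motion information satisfies the fourth condition) can be sketched as below. The function and device names are hypothetical illustrations, not taken from the cited references:

```python
def route_detection(detection, fourth_condition_met):
    """Route detection information to output devices registered in advance.

    The first output device (operator's) always receives the information; the
    second output device (assistant's) stops receiving it once the assistant's
    motion information satisfies the predetermined fourth condition.
    """
    outputs = {"operator_display": detection}      # first output device
    if not fourth_condition_met:
        outputs["assistant_display"] = detection   # second output device
    return outputs

# Usage: the assistant's display is suppressed only when the condition is met.
print(route_detection("frame-001", fourth_condition_met=False))
print(route_detection("frame-001", fourth_condition_met=True))
```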
Response to Arguments Applicant’s arguments with respect to claim(s) 1, 14, and 15, specifically that the primary reference does not teach the limitations of the amended independent claims, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments filed February 4th, 2026 have been fully considered but they are not persuasive. Regarding Applicant’s arguments on Pages 8-9, Applicant argues that there would have been no reason to modify the primary reference. The Examiner respectfully disagrees. In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, one of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. The system taught in Whelan in view of Hashimoto is already configured to gather information using a detection unit and display this information on a display. Modifying the robot to have sensors mounted on the robot itself as taught in Usui would have been well within the technological capabilities of a person of ordinary skill in the art. The modification would have required simply adding additional sensors to the robot according to known methods. 
Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. Therefore, for the reasons stated above and in the 35 U.S.C. § 103 rejection section, the 35 U.S.C. § 103 rejection of independent claims 1, 14, and 15 is maintained. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah W Stiebritz whose telephone number is (571)272-3414. The examiner can normally be reached Monday thru Friday 7-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.W.S./ Examiner, Art Unit 3658 /Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658

Prosecution Timeline

Sep 19, 2024: Application Filed
Dec 01, 2025: Non-Final Rejection (§103)
Feb 04, 2026: Response Filed
Mar 09, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602063: LOAD HANDLING SYSTEM AND LOAD HANDLING METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12575900: Steerable Eversion Robot System and Method of Operating the Steerable Eversion Robot System (granted Mar 17, 2026; 2y 5m to grant)
Patent 12552043: METHOD FOR CONTROLLING ROBOTIC ARM, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12472640: CONTROL METHOD AND SYSTEM FOR ARTICLE TRANSPORTATION BASED ON MOBILE ROBOT (granted Nov 18, 2025; 2y 5m to grant)
Patent 12467759: VEHICLE WITH SWITCHABLE FORWARD AND BACKWARD CONFIGURATIONS, CONTROL METHOD, AND CONTROL PROGRAM (granted Nov 11, 2025; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 51% (-15.6%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
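The projection figures above are consistent with a simple percentage-point adjustment of the career allow rate by the interview lift. This is an assumption about the page's methodology, not a documented formula; the function name is illustrative:

```python
def adjusted_grant_probability(career_allow_rate_pct, interview_lift_pp):
    """Apply an interview lift, in percentage points, to the base allow rate,
    clamped to the valid 0-100% range."""
    return max(0.0, min(100.0, career_allow_rate_pct + interview_lift_pp))

base = round(12 / 18 * 100)  # 12 granted of 18 resolved cases
print(base)                                            # 67
print(round(adjusted_grant_probability(67.0, -15.6)))  # 51
```

Under this reading, the 67% career allow rate less the 15.6-point interview lift reproduces the 51% "With Interview" figure shown above.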
