Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the amendment filed on 02/05/2026. Claims 1-11 and 26-34 are currently pending, with claims 1-2 and 26-27 amended and claims 31-34 newly added.
Response to Amendment
The amendments to the claims submitted on 02/05/2026 overcome the claim objections set forth in the previous Office action, except for those maintained in the claim objections section below.
Response to Arguments
The Examiner notes that Applicant's arguments are directed to the newly amended limitations, which had not been addressed by the prior art of record. Accordingly, the Examiner has augmented the rejection(s) below in view of the prior art of record to address the newly amended limitations.
Regarding the Applicant’s remarks directed to the rejection of the claims under 35 U.S.C. 101, the Examiner notes that these remarks are directed to the newly amended claim language, and a full response is provided below. The Examiner disagrees that the amended claims overcome the 101 rejection, as the claimed steps remain performable in the human mind. The human mind may process the situation and determine how to adjust or “fine-tune” a safety envelope for a robotic device based on the environment and the desired or planned motion of the system. The Examiner further notes that the “generation” of an action does not positively require the action to be enacted: a human may mentally determine the desired action, and the claim does not require the action to be performed by the device. The full analysis is presented below.
Further, regarding the Applicant’s remarks directed to the rejection of the claims under 35 U.S.C. 103, the Examiner notes that the Applicant has argued the newly amended limitations. The arguments and amendments have been fully considered and are persuasive; therefore, the previous rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Hopkinson et al. (US 20220126451 A1).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 33 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term “near” in claim 33 is a relative term which renders the claim indefinite. The term “near” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For example, “near” could mean within the next 1 second, within the next 10 seconds, or within the next 2 minutes. It is therefore unclear how one of ordinary skill in the art would determine the metes and bounds of the claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Under 35 U.S.C. § 101, a claim is directed to non-statutory subject matter if:
It does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter); or
It meets a three-prong test establishing that:
the claim recites a judicial exception (such as a law of nature, a natural phenomenon, or an abstract idea),
the judicial exception is not integrated into a practical application, and
the claim does not recite additional elements that provide significantly more than the recited judicial exception.
Claims 1-5, 7-11 and 26-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1: Statutory Category – Is the claim directed to one of the four statutory categories (a process, a machine, a manufacture, or a composition of matter)?
Claim 1 is directed to a device (i.e., a machine) and claim 26 is directed to a non-transitory computer readable medium (i.e., a machine). Therefore, claims 1-11 and 26-34 fall within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I – Is the claim directed to a judicial exception?
The judicial exceptions are as follows:
Abstract ideas (mathematical concepts, mental processes, and certain methods of organizing human activity)
Laws of nature (e.g., naturally occurring correlations, scientific principles)
Natural phenomena (e.g., wind)
Products of nature (e.g., a plant found in the wild, minerals)
Regarding Prong I of the Step 2A analysis in the 2019 Patent Eligibility Guidance (PEG), the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
The Office submits that the limitations emphasized below constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper”. See MPEP 2106.04(a)(2)(III).
Independent claim 1 includes limitations that recite an abstract idea (emphasized below).
Claim 1 recites:
1. (Currently Amended) A device comprising a processor configured to:
determine a safety envelope of a robot based on a planned movement of the robot and based on state information about a load carried by the robot, wherein the state information comprises a dynamic status of the load;
fine-tune the safety envelope for the robot and one or more other tracked objects based on a machine-learning model, giving rise to a fine-tuned safety envelope;
determine a safety risk based on a detected object with respect to the safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, “determine…” in the context of this claim encompasses a person viewing a scene and forming a simple judgment, and “generate…” in the context of this claim encompasses a person deciding that the robot should be stopped because the risk of contact with a person or object is too high.
Regarding the limitation of determining the safety envelope based on state information and anticipated robot motion, a human may mentally perform this process by, for example, observing a robot moving forward in a straight line while carrying a payload held in a fixed position relative to the robot’s center of gravity. The human may accordingly determine that the safety envelope should be closer to the robot because the movement is fairly stable. A human may further observe a robot moving through aisles of shelving, which require the robot to turn, and determine that the safety envelope should be larger because the robot is less stable. This determination could likewise be made for a stationary robot that grasps an object and brings it closer to the robot’s center of gravity, wherein a user may determine a safety envelope that accounts for the instability caused by the changing distance between the object and the center of gravity.
Regarding the use of a machine-learning model to “fine-tune” the determined envelope, this may also be performed in the human mind. A human may determine the safety envelope and then “fine-tune” its limits based on their mental processing of the environment. For example, they may see a person approaching the robot while carrying a load that prevents the person from seeing the robot. They may then determine that the safety envelope should be smaller in order to keep that person safe while the person passes within the operating limits of the system. If the planned motion of the system would bring the person and the robot within a distance the observer considers unsafe, the observer may determine an action, such as stopping the robot or slowing its movement, that would reduce the risk that the person and the robot collide. Accordingly, the claim recites at least one abstract idea.
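For illustration only, the claimed sequence of determinations may be expressed as the following minimal sketch. All function names, coefficients, and thresholds below are hypothetical and appear nowhere in the claims or the record; the sketch merely shows that each claimed step reduces to a discrete observation-and-judgment of the kind discussed above.

```python
# Hypothetical sketch of the flow recited in claim 1; every name and
# constant is illustrative and is not drawn from the claims.

def determine_safety_envelope(planned_speed_mps: float, load_offset_m: float) -> float:
    # Base envelope radius grows with planned speed and with how far the
    # load sits from the robot's center of gravity (less stable).
    return 0.5 + 0.4 * planned_speed_mps + 0.8 * load_offset_m

def fine_tune_envelope(base_radius_m: float, tracked_objects: list, model) -> float:
    # "model" stands in for a learned scene-scoring function, e.g., one
    # accounting for an occluded person approaching the robot.
    scale = model(tracked_objects)  # e.g., returns a factor of 0.8 to 1.5
    return base_radius_m * scale

def generate_mitigation(envelope_m: float, object_distance_m: float,
                        risk_threshold: float):
    # Risk rises as a detected object nears the envelope; a mitigating
    # action is generated (not necessarily enacted) above the threshold.
    risk = max(0.0, (envelope_m - object_distance_m) / envelope_m)
    return "slow_or_stop" if risk > risk_threshold else None
```

Each function above corresponds to a single judgment that, as discussed, a human could perform mentally; casting the steps in code changes nothing in the Step 2A, Prong I analysis.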
Independent claim 26 includes limitations that recite an abstract idea (emphasized below).
Claim 26 recites:
26. (Currently Amended) A non-transitory, computer readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to:
determine a safety envelope of a robot based on a planned movement of the robot and based on state information about a load carried by the robot, wherein the state information comprises a dynamic status of the load;
fine-tune the safety envelope for the robot and other tracked objects based on a machine-learning model, giving rise to a fine-tuned safety envelope;
determine a safety risk based on a detected object with respect to the safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, “determine…” in the context of this claim encompasses a person viewing a scene and forming a simple judgment, and “generate…” in the context of this claim encompasses a person deciding that the robot should be stopped because the risk of contact with a person or object is too high.
Regarding the limitation of determining the safety envelope based on state information and anticipated robot motion, a human may mentally perform this process by, for example, observing a robot moving forward in a straight line while carrying a payload held in a fixed position relative to the robot’s center of gravity. The human may accordingly determine that the safety envelope should be closer to the robot because the movement is fairly stable. A human may further observe a robot moving through aisles of shelving, which require the robot to turn, and determine that the safety envelope should be larger because the robot is less stable. This determination could likewise be made for a stationary robot that grasps an object and brings it closer to the robot’s center of gravity, wherein a user may determine a safety envelope that accounts for the instability caused by the changing distance between the object and the center of gravity.
Regarding the use of a machine-learning model to “fine-tune” the determined envelope, this may also be performed in the human mind. A human may determine the safety envelope and then “fine-tune” its limits based on their mental processing of the environment. For example, they may see a person approaching the robot while carrying a load that prevents the person from seeing the robot. They may then determine that the safety envelope should be smaller in order to keep that person safe while the person passes within the operating limits of the system. If the planned motion of the system would bring the person and the robot within a distance the observer considers unsafe, the observer may determine an action, such as stopping the robot or slowing its movement, that would reduce the risk that the person and the robot collide. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II – Does the claim, as a whole, integrate the abstract idea into a practical application?
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
The guidelines provide the following (non-exhaustive) list of exemplary considerations indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
An additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
An additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
An additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture which is integral to the claim;
An additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
The following is a (non-exhaustive) list of examples in which a judicial exception has not been integrated into a practical application:
An additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
An additional element adds insignificant extra-solution activity to the judicial exception;
An additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
The Office submits that the additional limitations identified below do not integrate the recited judicial exception into a practical application.
Independent claim 1 includes limitations that recite additional limitations (emphasized below).
Claim 1 recites:
1. (Currently Amended) A device comprising a processor configured to:
determine a safety envelope of a robot based on a planned movement of the robot and based on state information about a load carried by the robot, wherein the state information comprises a dynamic status of the load;
fine-tune the safety envelope for the robot and one or more other tracked objects based on a machine-learning model, giving rise to a fine-tuned safety envelope;
determine a safety risk based on a detected object with respect to the safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value.
For the following reason(s), the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitation of a processor, the examiner submits that this limitation is insignificant extra-solution activity that merely uses a computer (processor) to perform the process. The “processor” merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose robotic control environment. The processor is recited at a high level of generality and merely automates the generating step.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Independent claim 26 includes limitations that recite additional limitations (emphasized below).
Claim 26 recites:
26. (Currently Amended) A non-transitory, computer readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to:
determine a safety envelope of a robot based on a planned movement of the robot and based on state information about a load carried by the robot, wherein the state information comprises a dynamic status of the load;
fine-tune the safety envelope for the robot and other tracked objects based on a machine-learning model, giving rise to a fine-tuned safety envelope;
determine a safety risk based on a detected object with respect to the safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value.
For the following reason(s), the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitation of a processor, the examiner submits that this limitation is insignificant extra-solution activity that merely uses a computer (processor) to perform the process. The “processor” merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose robotic control environment. The processor is recited at a high level of generality and merely automates the generating step.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B – Do the additional elements incorporate an inventive concept to the claim?
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
Regarding independent claim 1:
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the generating step amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The specification does not provide any indication that the processor is anything other than a conventional computer within a robotic control system. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.
Thus, independent claim 1, as well as its dependent claims, is directed to an abstract idea that is not integrated into a practical application and does not recite significantly more than the abstract idea.
Regarding independent claim 26:
Regarding Step 2B of the 2019 PEG, representative independent claim 26 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the generating step amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The specification does not provide any indication that the processor is anything other than a conventional computer within a robotic control system. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.
Thus, independent claim 26, as well as its dependent claims, is directed to an abstract idea that is not integrated into a practical application and does not recite significantly more than the abstract idea.
101 Analysis – Dependent Claims and Conclusion
Dependent claim(s) 2-5, 7-11, and 27-34 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-5, 7-11, and 27-34 are not patent eligible under the same rationale as provided for in the rejection of claim(s) 1 and 26.
Therefore, claims 1-5, 7-11, and 26-34 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3-5, 7-11, 26, and 28-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kriveshko et al. (US 20220234209 A1), hereinafter Kriveshko in view of Guy et al. (US 20170364073 A1), hereinafter Guy and Hopkinson et al. (US 20220126451 A1), hereinafter Hopkinson.
Regarding claim 1, Kriveshko teaches:
1. (Currently Amended) A device comprising a processor configured to:
determine a safety envelope of a robot (Paragraph 0079, "In some embodiments, path optimization includes creation of a 3D “keep-in” zone (or volume) (i.e., a zone/volume to which the robot is restricted during operation) and/or a “keep-out” zone (or volume) (i.e., a zone/volume from which the robot is restricted during operation). Keep-in and keep-out zones restrict robot motion through safe limitations on the possible robot axis positions in Cartesian and/or joint space. Safety limits may be set outside these zones so that, for example, their breach by the robot in operation triggers a stop. Conventionally, robot keep-in zones are defined as prismatic bodies. For example, referring to FIG. 9A, a keep-in zone 902 determined using the conventional approach takes the form of a prismatic volume; the keep-in zone 902 is typically larger than the total swept volume 904 of the machinery during operation (which may be determined either by simulation or characterization using, for example, scanning data acquired by the sensor system 101). Based on the determined keep-in zone 902, the robot controller may implement a position-limiting function to enforce the position limiting of the machinery to be within the keep-in zone 902.") based on a planned movement of the robot and based on state information about a load carried by the robot, (Paragraph 0057, " The process of modeling the robot dynamics and mapping the safe region, however, may be simplified by assuming that the robot's current position is fixed and estimating the region that any portion of the robot may conceivably occupy within a short future time interval only. Thus, various embodiments of the present invention include approaches to modeling the robot dynamics and/or human activities in the workspace 100 and mapping the human-robot collaborative workspace 100 (e.g., calculating the safe and/or unsafe regions) over short intervals based on the current states (e.g., current positions, velocities, accelerations, geometries, kinematics, expected positions and/or orientations associated with the next action in the task/application) associated with the machinery (including the robot 106 and/or other industrial equipment) and the human operator. In addition, the modeling and mapping procedure may be repeated (based on, for example, the scanning data of the machinery and the human acquired by the sensor system 101 during performance of the task/application) over time, thereby effectively updating the safe and/or unsafe regions on a quasi-continuous basis in real time.") …
…
determine a safety risk based on a detected object with respect to the fine-tuned safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value. (Paragraph 0018, "In various embodiments, the object-monitoring system is further configured to computationally detect a predetermined degree of proximity between the second potential occupancy envelope and the updated first potential occupancy envelope and to thereupon cause the controller to put the machinery in a safe state. For example, the predetermined degree of proximity may correspond to a protective separation distance. The object-monitoring system may be configured to (i) detect a current state of the machinery, (ii) compute parameters for putting the machinery in the safe state from the current state, and (iii) communicate the parameters to the controller when the predetermined degree of proximity is detected.")
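For illustration, the keep-in-zone enforcement described in the portions of Kriveshko quoted above (paragraphs 0079 and 0086) may be sketched as follows; the prismatic zone representation follows paragraph 0079, while the function names and coordinate layout are hypothetical.

```python
# Hypothetical sketch of Kriveshko's keep-in-zone position limiting
# (paras. 0079, 0086): breach of the zone triggers a safe-action stop.

def inside_keep_in_zone(position, zone_min, zone_max):
    # Zone modeled as an axis-aligned prismatic volume (para. 0079).
    return all(lo <= p <= hi for p, lo, hi in zip(position, zone_min, zone_max))

def position_limit_check(position, zone_min, zone_max):
    # Issue a safe-action command (e.g., stop) on breach (para. 0086).
    return None if inside_keep_in_zone(position, zone_min, zone_max) else "stop"
```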
Kriveshko does not specifically teach the state information comprising a dynamic status of a load or the use of a machine-learning model to fine-tune the safety envelope. However, Guy, in the same field of endeavor of robotic control, teaches:
… wherein the state information comprises a dynamic status of the load, (Paragraph 0075, "Another inventive subject matter includes a method of stably transporting a load by a first and a second robot. FIG. 10 depicts flow chart 1000 of one embodiment of the method. In this embodiment, the method begins with step 1010, which provides a first and second robot, each having a motive mechanism that is independently operable from the other, as described above with step 510. In step 1020, each of the robots obtains estimates of a load width, a load length, and a load height in a manner as described in step 520, and in step 1030 each robot obtains an estimate of load stability. In step 1040, each robot autonomously determines how to engage the load for stable transportation, as described with respect to FIG. 8, and in step 1050 each robot autonomously cooperates with the other robot to stably transport the load. It is contemplated that as the load is transported by the robots, the stability of the load may change. In step 1060, each of the robots autonomously reconfigures its engagement with the load in response to changes in load stability during transport.") …
Neither Kriveshko nor Guy specifically teaches fine-tuning the safety envelope based on a machine-learning model. However, Hopkinson, in the same field of endeavor of robotics, teaches:
… fine-tune the safety envelope for the robot and one or more other tracked objects based on a machine learning model, giving rise to a fine-tuned safety envelope; … (Paragraph 0104, “The rule analyzer 359 determines or assesses a likelihood or probability that a motion or transition (represented by an edge in a graph) will result in the processor-based workcell safety system triggering a stoppage, slowdown or precautionary occlusion or other inhibition of robot operation. For example, the rule analyzer 359 may evaluate or simulate a motion plan or portion thereof (e.g., an edge) of one or more robots, determining whether any transitions will violate a safety rule (e.g., result in the robot(s) or portion thereof passing too close to a human as defined by the safety monitoring rules 125c (FIG. 1) implemented by the processor-based workcell safety system). For example, the rule analyzer 359 may evaluate or simulate a position and/or path or trajectory of an object (e.g., human) or portion thereof, determining whether any position or movements of the object will violate a safety rule (e.g., result in a human or portion thereof passing too close to a robot or robots as defined by the safety monitoring rules 125c (FIG. 1) implemented by the processor-based workcell safety system). For instance, where the processor-based workcell safety system employs a laser scanner that sections a portion of the operational environment into a grid, and a rule enforced by the processor-based workcell safety causes a stoppage, slowdown or precautionary occlusion when a human is within one grid position of the position of a portion of the robot, the rule analyzer 359 may identify transitions that would bring a portion of the robot within one grid of the position of a human, or predicted position of a human, so that weights associated with edges corresponding to those identified transitions can be adjusted (e.g., increased).” As well as Paragraphs 0148, “In response to the validation indicating that an anomalous system status does not exist for the processor-based workcell safety system 200 (e.g., all sensors 132 operating within defined operational parameters, a sufficient number of sensors 132 operating within defined operational parameters, a majority of sensors 132 operating consistently with one another within defined operational parameters), at 516 at least one processor 222 (FIG. 2) of the processor-based workcell safety system 200 (FIG. 2) monitors the operational environment 104 (FIG. 1) for occurrences of violations of safety monitoring rules 125c (FIG. 1) To monitor the operational environment for safety rule violations the processor(s) 222 may employ sensor data that represents objects in the operational environment 104 (FIG. 1). The processor(s) 222 may identify objects that are, or that appear to be, humans. The processor(s) 222 may determine a current position of one or more humans in the operational environment and/or a three-dimensional area occupied by the human(s). The processor(s) 222 may, optionally predict a path or a trajectory of the human over a period of time and/or a three-dimensional area occupied by the human(s) over the period of time. For instance, the processor(s) 222 may determine the path or trajectory or three-dimensional area based on a current position of the human(s) and based on previous movements of the human(s), and/or based on predicted behavior or training of the human(s). 
The processor(s) 222 may employ artificial intelligence or machine-learning to predict the path or trajectory of the human. The processor(s) 222 may determine a current position of one or more robots and/or a three-dimensional area occupied by the robot(s) over the period of time. For instance, the processor(s) 222 may determine the path or trajectory or three-dimensional area based on a current position of the robot(s) and a motion plan for the robot.” and 0173, “Optionally at 806, at least one processor 322 of the processor-based robot control system 300 determines a predicted behavior of a human (e.g., operator) in the workcell or operational environment 104 or who appears to be likely to enter the workcell or operational environment 104. The at least one processor 322 may, for example, determine the predicted behavior of the person in the workcell or operational environment 104 using machine-learning or artificial intelligence, being trained on a dataset of similar operational environments and robot scenarios. The at least one processor 322 may, for example, determine the predicted behavior of the human in the workcell or operational environment 104 based at least in part on a set of operator training guidelines, which specify positions or locations and times and/or speed of movement of operators and other humans when present in the operational environment 104. The at least one processor 222 may, for example, determine a predicted trajectory (e.g., path, speed) of a human at least partially through the workcell or operational environment 104.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods taught by Kriveshko with the ability to continuously monitor the state of the load and react accordingly as taught by Guy, and to utilize machine learning when determining the allowable limits for safe operation of the system as taught by Hopkinson. Guy teaches monitoring and estimating the load dynamics of an object being transported by a robotic system; this information allows the system to transport objects stably and avoid tip-over events, thereby increasing the safety and effectiveness of the system. Hopkinson teaches employing machine learning to predict the behavior and trajectories of humans and other tracked objects in the operational environment, which allows the system to adjust its safety limits proactively and further reduce the risk of collision.
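For context, the edge-weighting behavior Hopkinson describes in paragraph 0104 may be sketched as follows; the grid-cell data layout and penalty value are hypothetical paraphrases rather than Hopkinson's implementation.

```python
# Hypothetical paraphrase of Hopkinson's rule analyzer (para. 0104):
# transitions (graph edges) that would bring the robot too close to a
# predicted human position receive increased weights, steering the
# motion planner away from them. Names and values are illustrative.

def reweight_edges(edges, predicted_human_cells, penalty=10.0):
    # edges: list of {"cells": set of grid cells swept, "weight": float}
    for edge in edges:
        if edge["cells"] & predicted_human_cells:  # passes too close to a human
            edge["weight"] += penalty              # discourage this transition
    return edges
```

Under this reading, the machine-learning component supplies the predicted human occupancy (predicted_human_cells), and the safety rule is enforced by penalizing the offending transitions.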
Regarding claim 3, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
3. (Original) The device of claim 1, wherein the planned movement of the robot (Paragraph 0057, " The process of modeling the robot dynamics and mapping the safe region, however, may be simplified by assuming that the robot's current position is fixed and estimating the region that any portion of the robot may conceivably occupy within a short future time interval only. Thus, various embodiments of the present invention include approaches to modeling the robot dynamics and/or human activities in the workspace 100 and mapping the human-robot collaborative workspace 100 (e.g., calculating the safe and/or unsafe regions) over short intervals based on the current states (e.g., current positions, velocities, accelerations, geometries, kinematics, expected positions and/or orientations associated with the next action in the task/application) associated with the machinery (including the robot 106 and/or other industrial equipment) and the human operator. In addition, the modeling and mapping procedure may be repeated (based on, for example, the scanning data of the machinery and the human acquired by the sensor system 101 during performance of the task/application) over time, thereby effectively updating the safe and/or unsafe regions on a quasi-continuous basis in real time.") comprises at least one of a movement of the robot along a planned trajectory, (Paragraph 0100, "In one embodiment, the mapping module 246 can receive data from a conventional computer vision system that monitors the machinery, the sensor system that scans the machinery and the operator, and/or the robot (e.g., joint position data, keep-in zones and/or or intended trajectory), in step 1432. The computer vision system utilizes the sensor system to track movements of the machinery and the operator during physical execution of the task. The computer vision system is calibrated to the coordinate reference frame of the workspace and transmits to the mapping module 246 coordinate data corresponding to the movements of the machinery and the operator. In various embodiments, the tracking data is then provided to the movement-prediction module 245 for predicting the movements of the machinery and the operator in the next time interval (step 1428). Subsequently, the mapping module 246 transforms this prediction data into voxel-level representations to produce the POEs of the machinery and the operator in the next time interval (step 1430). Steps 1428-1432 may be iteratively performed during execution of the task.") a velocity of the movement of the robot along the planned trajectory, or a position of the robot along the planned trajectory. (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed. For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. 
In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.")
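The protective-separation-distance (PSD) logic quoted above from paragraph 0090 — compute the minimum separation from the machinery's current state, compare it to the current separation, and reduce speed when violated — may be sketched as follows; the stopping-time and margin constants are hypothetical stand-ins for Kriveshko's computed quantities.

```python
# Hypothetical sketch of the PSD check in Kriveshko para. 0090;
# constants are illustrative stand-ins, not values from the record.

def protective_separation_distance(speed_mps, stop_time_s=0.5, margin_m=0.3):
    # Minimum separation so the machinery reaches a safe state before
    # contact: distance covered while stopping, plus a fixed margin.
    return speed_mps * stop_time_s + margin_m

def enforce_psd(robot_speed_mps, operator_distance_m):
    # Re-run periodically as states update; reduce speed when the
    # current separation falls below the required PSD.
    if operator_distance_m < protective_separation_distance(robot_speed_mps):
        return robot_speed_mps * 0.5  # adjust (reduce) speed toward a safe state
    return robot_speed_mps
```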
Regarding claim 4, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
4. (Previously Presented) The device of claim 1, wherein the processor is configured to determine the safety envelope (Paragraph 0079, "In some embodiments, path optimization includes creation of a 3D “keep-in” zone (or volume) (i.e., a zone/volume to which the robot is restricted during operation) and/or a “keep-out” zone (or volume) (i.e., a zone/volume from which the robot is restricted during operation). Keep-in and keep-out zones restrict robot motion through safe limitations on the possible robot axis positions in Cartesian and/or joint space. Safety limits may be set outside these zones so that, for example, their breach by the robot in operation triggers a stop. Conventionally, robot keep-in zones are defined as prismatic bodies. For example, referring to FIG. 9A, a keep-in zone 902 determined using the conventional approach takes the form of a prismatic volume; the keep-in zone 902 is typically larger than the total swept volume 904 of the machinery during operation (which may be determined either by simulation or characterization using, for example, scanning data acquired by the sensor system 101). Based on the determined keep-in zone 902, the robot controller may implement a position-limiting function to enforce the position limiting of the machinery to be within the keep-in zone 902.") based on at least one of a threshold braking distance of the robot with the load, (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed. For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. 
Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.") a threshold turn radius of the robot with the load, (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).") a velocity of the planned movement of the robot with the load, a trajectory of the planned movement of the robot with the load, or an acceleration of the planned movement of the robot with the load. (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed.
For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.")
Regarding claim 5, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
5. (Original) The device of claim 1, wherein the processor is configured to determine a predicted trajectory of the detected object based on at least one of … a velocity of the detected object, an acceleration of the detected object, (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).") a type of detected object, (Paragraph 0086, "Typically, the robot controller 1004 itself does not have a safe way to govern (e.g., modify) the state (e.g., speed, position, etc.) of the robot; rather, it only has a safe way to enforce a given state. To govern and enforce the state of the robot in a safe manner, in various embodiments, an object-monitoring system (OMS) 1010 is implemented to cooperatively work with the safety-rated component 1006 and non-safety-rated component 1008 as further described below. In one embodiment, the OMS 1010 obtains information about objects from the sensor system 1001 and uses this sensor information to identify relevant objects in the workspace 1000.
For example, OMS 1010 may, based on the information obtained from the sensor system (and/or the robot), monitor whether the robot is in a safe state (e.g., remains within a specific zone (e.g., the keep-in zone), stays below a specified speed, etc.), and if not, issues a safe-action command (e.g., stop) to the robot controller 1004.") or a pose of the detected object. (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).")
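For illustration, the trajectory prediction recited in claim 5 and described in the quoted portions of Kriveshko (e.g., the movement-prediction module of paragraph 0098) may be sketched with a simple constant-acceleration model; the integration scheme, time step, and horizon are hypothetical choices, not Kriveshko's disclosed method.

```python
# Hypothetical constant-acceleration sketch of predicting a detected
# object's future positions from its current state; dt and the number
# of steps are illustrative.

def predict_positions(position, velocity, acceleration, steps=10, dt=0.1):
    (px, py), (vx, vy), (ax, ay) = position, velocity, acceleration
    out = []
    for _ in range(steps):
        vx += ax * dt
        vy += ay * dt
        px += vx * dt
        py += vy * dt
        out.append((px, py))
    return out
```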
Regarding claim 7, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
7. (Original) The device of claim 1, wherein the processor is configured to … from a sensor configured to collect sensor data (Paragraph 0015, "In various embodiments, one or more two-dimensional (2D) and/or three-dimensional (3D) imaging sensors are employed to scan the robot, human operator and/or workspace during actual execution of the task. Based thereon, the POEs of the robot and the human operator can be updated in real-time and provided as feedback to adjust the state (e.g., position, orientation, velocity, acceleration, etc.) of the robot and/or the modeled workspace. In some embodiments, the scanning data is stored in memory and can be used as an input when modeling the workspace in the same human-robot collaborative application next time. In some embodiments, robot state can be communicated from the robot controller, and subsequently validated by the 2D and/or 3D imaging sensors. In other embodiments, the scanning data may be exported from the system in a variety of formats for use in other CAD software. In still other embodiments, the POE is generated by simulating performance (rather than scanning actual performance) of a task by a robot or other machinery." as well as Paragraph 0100, "In one embodiment, the mapping module 246 can receive data from a conventional computer vision system that monitors the machinery, the sensor system that scans the machinery and the operator, and/or the robot (e.g., joint position data, keep-in zones and/or or intended trajectory), in step 1432. The computer vision system utilizes the sensor system to track movements of the machinery and the operator during physical execution of the task. The computer vision system is calibrated to the coordinate reference frame of the workspace and transmits to the mapping module 246 coordinate data corresponding to the movements of the machinery and the operator. In various embodiments, the tracking data is then provided to the movement-prediction module 245 for predicting the movements of the machinery and the operator in the next time interval (step 1428). Subsequently, the mapping module 246 transforms this prediction data into voxel-level representations to produce the POEs of the machinery and the operator in the next time interval (step 1430). Steps 1428-1432 may be iteratively performed during execution of the task.") …
Kriveshko does not specifically teach state information being a dynamic status of a load. However, Guy, in the same field of endeavor of robotic control, teaches:
… receive the state information … indicative of the state information. (Paragraph 0075, "Another inventive subject matter includes a method of stably transporting a load by a first and a second robot. FIG. 10 depicts flow chart 1000 of one embodiment of the method. In this embodiment, the method begins with step 1010, which provides a first and second robot, each having a motive mechanism that is independently operable from the other, as described above with step 510. In step 1020, each of the robots obtains estimates of a load width, a load length, and a load height in a manner as described in step 520, and in step 1030 each robot obtains an estimate of load stability. In step 1040, each robot autonomously determines how to engage the load for stable transportation, as described with respect to FIG. 8, and in step 1050 each robot autonomously cooperates with the other robot to stably transport the load. It is contemplated that as the load is transported by the robots, the stability of the load may change. In step 1060, each of the robots autonomously reconfigures its engagement with the load in response to changes in load stability during transport.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to continuously monitor the state of the load and react accordingly as taught by Guy. Guy teaches monitoring and estimating the load dynamics information of an object being transported by a robotic system. This information allows the system to transport objects stably, thereby increasing the safety and effectiveness of the system.
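For illustration only, a "dynamic status of a load" might be represented as a small data structure with a stability check loosely in the spirit of Guy's load-stability estimate. The fields and thresholds below are invented, not taken from the record.

    from dataclasses import dataclass

    @dataclass
    class LoadStatus:              # hypothetical "dynamic status of a load"
        mass_kg: float
        tilt_deg: float            # tilt of the load relative to the carrying surface
        slip_mm: float             # displacement of the load since pickup

    def load_is_stable(status, max_tilt_deg=5.0, max_slip_mm=10.0):
        # True while the load's dynamic status stays within the assumed limits
        return status.tilt_deg <= max_tilt_deg and status.slip_mm <= max_slip_mm

    if not load_is_stable(LoadStatus(mass_kg=40.0, tilt_deg=6.2, slip_mm=3.0)):
        print("load unstable: tighten the safety envelope or reduce speed")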
Regarding claim 8, where all the limitations of claim 7 are discussed above, Kriveshko further teaches:
8. (Original) The device of claim 7, wherein the device further comprises a receiver, wherein the processor is configured to receive the state information from the sensor via the receiver. (Paragraphs 0051-0052, "In various embodiments, data obtained by each of the sensors 102.sub.1-3 is transmitted to a control system 112. Based thereon, the control system 112 may computationally generate a 3D spatial representation (e.g., voxels) of the workspace 100, recognize the robot 106, human operator and/or workpiece handled by the robot and/or human operator, and track movements thereof as further described below. In addition, the sensors 102.sub.1-3 may be supported by various software and/or hardware components 114.sub.1-3 for changing the configurations (e.g., orientations and/or positions) of the sensors 102.sub.1-3; the control system 112 may be configured to adjust the sensors so as to provide optimal coverage of the monitored area in the workspace 100. The volume of space covered by each sensor—typically a solid truncated pyramid or solid frustum may be represented in any suitable fashion, e.g., the space may be divided into a 3D grid of small (5 cm, for example) voxels or other suitable form of volumetric representation. For example, a 3D representation of the workspace 100 may be generated using 2D or 3D ray tracing. This ray tracing can be performed dynamically or via the use of precomputed volumes, where objects in the workspace 100 are previously identified and captured by the control system 112. For convenience of presentation, the ensuing discussion assumes a voxel representation, and the control system 112 maintains an internal representation of the workspace 100 at the voxel level.
FIG. 2 illustrates, in greater detail, a representative embodiment of the control system 112, which may be implemented on a general-purpose computer. The control system 112 includes a central processing unit (CPU) 205, system memory 210, and one or more non-volatile mass storage devices (such as one or more hard disks and/or optical storage units) 212. The control system 112 further includes a bidirectional system bus 215 over which the CPU 205, functional modules in the memory 210, and storage device 212 communicate with each other as well as with internal or external input/output (I/O) devices, such as a display 220 and peripherals 222 (which may include traditional input devices such as a keyboard or a mouse). The control system 112 also includes a wireless transceiver 225 and one or more I/O ports 227. The transceiver 225 and I/O ports 227 may provide a network interface. The term “network” is herein used broadly to connote wired or wireless networks of computers or telecommunications devices (such as wired or wireless telephones, tablets, etc.). For example, a computer network may be a local area network (LAN) or a wide area network (WAN). When used in a LAN networking environment, computers may be connected to the LAN through a network interface or adapter; for example, a supervisor may establish communication with the control system 112 using a tablet that wirelessly joins the network. When used in a WAN networking environment, computers typically include a modem or other communication mechanism. Modems may be internal or external, and may be connected to the system bus via the user-input interface, or other appropriate mechanism. Networked computers may be connected over the Internet, an Intranet, Extranet, Ethernet, or any other system that provides communications. Some suitable communications protocols include TCP/IP, UDP, or OSI, for example. For wireless communications, communications protocols may include IEEE 802.11x (“Wi-Fi”), Bluetooth, ZigBee, IrDa, near-field communication (NFC), or other suitable protocol. Furthermore, components of the system may communicate through a combination of wired or wireless paths, and communication may involve both computer and telecommunications networks.")
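For illustration only, a receiver obtaining state information from a sensor over one of the transports the quoted passage lists (UDP) might be sketched as follows; the port and message format are assumptions.

    import json
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))           # assumed sensor broadcast port

    payload, _addr = sock.recvfrom(4096)   # one datagram per state update
    state = json.loads(payload)            # e.g. {"tilt_deg": 1.2, "slip_mm": 0.4}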
Regarding claim 9, where all the limitations of claim 7 are discussed above, Kriveshko further teaches:
9. (Original) The device of claim 7, wherein the device further includes the sensor. (Paragraph 0015, "In various embodiments, one or more two-dimensional (2D) and/or three-dimensional (3D) imaging sensors are employed to scan the robot, human operator and/or workspace during actual execution of the task. Based thereon, the POEs of the robot and the human operator can be updated in real-time and provided as feedback to adjust the state (e.g., position, orientation, velocity, acceleration, etc.) of the robot and/or the modeled workspace. In some embodiments, the scanning data is stored in memory and can be used as an input when modeling the workspace in the same human-robot collaborative application next time. In some embodiments, robot state can be communicated from the robot controller, and subsequently validated by the 2D and/or 3D imaging sensors. In other embodiments, the scanning data may be exported from the system in a variety of formats for use in other CAD software. In still other embodiments, the POE is generated by simulating performance (rather than scanning actual performance) of a task by a robot or other machinery." as well as Paragraph 0100, "In one embodiment, the mapping module 246 can receive data from a conventional computer vision system that monitors the machinery, the sensor system that scans the machinery and the operator, and/or the robot (e.g., joint position data, keep-in zones and/or or intended trajectory), in step 1432. The computer vision system utilizes the sensor system to track movements of the machinery and the operator during physical execution of the task. The computer vision system is calibrated to the coordinate reference frame of the workspace and transmits to the mapping module 246 coordinate data corresponding to the movements of the machinery and the operator. In various embodiments, the tracking data is then provided to the movement-prediction module 245 for predicting the movements of the machinery and the operator in the next time interval (step 1428). Subsequently, the mapping module 246 transforms this prediction data into voxel-level representations to produce the POEs of the machinery and the operator in the next time interval (step 1430). Steps 1428-1432 may be iteratively performed during execution of the task.")
Regarding claim 10, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
10. (Original) The device of claim 1, wherein the device further includes a memory configured to store at least one of the safety envelope, the safety risk, or the mitigating action. (Paragraphs 0054-0055, "The system memory 210 may store a model of the machinery characterizing its geometry and kinematics and its permitted movements in the workspace. The model may be obtained from the machinery manufacturer or, alternatively, generated by the control system 112 based on the scanning data acquired by the sensor system 101. In addition, the memory 210 may store a safety protocol specifying various safety measures such as speed restrictions of the machinery in proximity to the human operator, a minimum separation distance between the machinery and the human, etc. In some embodiments, the memory 210 contains a series of frame buffers 235, i.e., partitions that store, in digital form (e.g., as pixels or voxels, or as depth maps), images obtained by the sensors 102.sub.1-3; the data may actually arrive via I/O ports 227 and/or transceiver 225 as discussed above.
The system memory 210 contains instructions, conceptually illustrated as a group of modules, that control the operation of CPU 205 and its interaction with the other hardware components. An operating system 240 (e.g., Windows or Linux) directs the execution of low-level, basic system functions such as memory allocation, file management and operation of the mass storage device 212. At a higher level, and as described in greater detail below, an analysis module 242 may register the images acquired by the sensor system 101 in the frame buffers 235, generate a 3D spatial representation (e.g., voxels) of the workspace and analyze the images to classify regions of the monitored workspace 100; an object-recognition module 243 may recognize the human and the machinery and movements thereof in the workspace based on the data acquired by the sensor system 101; a simulation module 244 may computationally perform at least a portion of the application/task performed by the machinery in accordance with the stored machinery model and application/task; a movement prediction module 245 may predict movements of the machinery and/or the human operator within a defined future interval (e.g., 0.1 sec, 0.5 sec, 1 sec, etc.) based on, for example, the current state (e.g., position, orientation, velocity, acceleration, etc.) thereof; a mapping module 246 may map or identify the POEs of the machinery and/or the human operator within the workspace; a state determination module 247 may determine an updated state of the machinery such that the machinery can be operated in a safe state; a path determination module 248 may determine a path along which the machinery can perform the activity; and a workspace modeling module 249 may model the workspace parameters (e.g., the dimensions, workflow, locations of the equipment and/or resources). The result of the classification, object recognition and simulation as well as the POEs of the machinery and/or human, the determined optimal path and workspace parameters may be stored in a space map 250, which contains a volumetric representation of the workspace 100 with each voxel (or other unit of representation) labeled, within the space map, as described herein. Alternatively, the space map 250 may simply be a 3D array of voxels, with voxel labels being stored in a separate database (in memory 210 or in mass storage 212).")
Regarding claim 11, where all the limitations of claim 1 are discussed above, Kriveshko further teaches:
11. (Original) The device of claim 1, wherein the robot is external to the device. (Paragraph 0056, "In addition, the control system 112 may communicate with the robot controller 108 to control operation of the machinery in the workspace 100 (e.g., performing a task/application programmed in the controller 108 or the control system 112) using conventional control routines collectively indicated at 252. As explained below, the configuration of the workspace may well change over time as persons and/or machines move about; the control routines 252 may be responsive to these changes in operating machinery to achieve high levels of safety. All of the modules in system memory 210 may be coded in any suitable programming language, including, without limitation, high-level languages such as C, C++, C#, Java, Python, Ruby, Scala, and Lua, utilizing, without limitation, any suitable frameworks and libraries such as TensorFlow, Keras, PyTorch, Caffe or Theano. Additionally, the software can be implemented in an assembly language and/or machine language directed to the microprocessor resident on a target device.")
Regarding claim 26, Kriveshko further teaches:
26. (Currently Amended) A non-transitory, computer readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to:
determine a safety envelope of a robot (Paragraph 0079, "In some embodiments, path optimization includes creation of a 3D “keep-in” zone (or volume) (i.e., a zone/volume to which the robot is restricted during operation) and/or a “keep-out” zone (or volume) (i.e., a zone/volume from which the robot is restricted during operation). Keep-in and keep-out zones restrict robot motion through safe limitations on the possible robot axis positions in Cartesian and/or joint space. Safety limits may be set outside these zones so that, for example, their breach by the robot in operation triggers a stop. Conventionally, robot keep-in zones are defined as prismatic bodies. For example, referring to FIG. 9A, a keep-in zone 902 determined using the conventional approach takes the form of a prismatic volume; the keep-in zone 902 is typically larger than the total swept volume 904 of the machinery during operation (which may be determined either by simulation or characterization using, for example, scanning data acquired by the sensor system 101). Based on the determined keep-in zone 902, the robot controller may implement a position-limiting function to enforce the position limiting of the machinery to be within the keep-in zone 902.") based on a planned movement of the robot and based on state information about a load carried by a robot, (Paragraph 0057, "The process of modeling the robot dynamics and mapping the safe region, however, may be simplified by assuming that the robot's current position is fixed and estimating the region that any portion of the robot may conceivably occupy within a short future time interval only. Thus, various embodiments of the present invention include approaches to modeling the robot dynamics and/or human activities in the workspace 100 and mapping the human-robot collaborative workspace 100 (e.g., calculating the safe and/or unsafe regions) over short intervals based on the current states (e.g., current positions, velocities, accelerations, geometries, kinematics, expected positions and/or orientations associated with the next action in the task/application) associated with the machinery (including the robot 106 and/or other industrial equipment) and the human operator. In addition, the modeling and mapping procedure may be repeated (based on, for example, the scanning data of the machinery and the human acquired by the sensor system 101 during performance of the task/application) over time, thereby effectively updating the safe and/or unsafe regions on a quasi-continuous basis in real time.") …
…
determine a safety risk based on a detected object with respect to the fine-tuned safety envelope; and
generate a mitigating action to the planned movement if the safety risk exceeds a threshold value. (Paragraph 0018, "In various embodiments, the object-monitoring system is further configured to computationally detect a predetermined degree of proximity between the second potential occupancy envelope and the updated first potential occupancy envelope and to thereupon cause the controller to put the machinery in a safe state. For example, the predetermined degree of proximity may correspond to a protective separation distance. The object-monitoring system may be configured to (i) detect a current state of the machinery, (ii) compute parameters for putting the machinery in the safe state from the current state, and (iii) communicate the parameters to the controller when the predetermined degree of proximity is detected.")
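For illustration only, the decision flow recited by claim 26 (determine an envelope, then a risk, then a mitigating action) can be pictured with the following sketch; the envelope model, risk measure, and threshold are invented stand-ins, not the applicant's or the references' implementation.

    def determine_safety_envelope(planned_speed_mps, load_mass_kg):
        # Crude envelope radius that grows with speed and load mass (assumed model)
        return 0.5 + 0.8 * planned_speed_mps + 0.01 * load_mass_kg   # meters

    def safety_risk(object_distance_m, envelope_m):
        # Risk rises as the detected object approaches the envelope boundary
        return max(0.0, 1.0 - (object_distance_m - envelope_m) / envelope_m)

    RISK_THRESHOLD = 0.5
    envelope_m = determine_safety_envelope(planned_speed_mps=1.5, load_mass_kg=40.0)
    if safety_risk(object_distance_m=2.0, envelope_m=envelope_m) > RISK_THRESHOLD:
        mitigating_action = "reduce_speed"   # e.g., slow down, stop, or re-route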
Kriveshko does not specifically teach state information being a dynamic status of a load or using a machine learning model when determining safe operation limits of the system. However, Guy, in the same field of endeavor of robotic control, teaches:
… wherein the state information comprises a dynamic status of the load (Paragraph 0075, "Another inventive subject matter includes a method of stably transporting a load by a first and a second robot. FIG. 10 depicts flow chart 1000 of one embodiment of the method. In this embodiment, the method begins with step 1010, which provides a first and second robot, each having a motive mechanism that is independently operable from the other, as described above with step 510. In step 1020, each of the robots obtains estimates of a load width, a load length, and a load height in a manner as described in step 520, and in step 1030 each robot obtains an estimate of load stability. In step 1040, each robot autonomously determines how to engage the load for stable transportation, as described with respect to FIG. 8, and in step 1050 each robot autonomously cooperates with the other robot to stably transport the load. It is contemplated that as the load is transported by the robots, the stability of the load may change. In step 1060, each of the robots autonomously reconfigures its engagement with the load in response to changes in load stability during transport.") …
However, Hopkinson, in the same field of endeavor of robotics, teaches:
… fine-tuning the safety envelope for the robot and one or more other tracked objects based on a machine learning model, giving rise to a fine-tuned safety envelope; … (Paragraph 0104, “The rule analyzer 359 determines or assesses a likelihood or probability that a motion or transition (represented by an edge in a graph) will result in the processor-based workcell safety system triggering a stoppage, slowdown or precautionary occlusion or other inhibition of robot operation. For example, the rule analyzer 359 may evaluate or simulate a motion plan or portion thereof (e.g., an edge) of one or more robots, determining whether any transitions will violate a safety rule (e.g., result in the robot(s) or portion thereof passing too close to a human as defined by the safety monitoring rules 125c (FIG. 1) implemented by the processor-based workcell safety system). For example, the rule analyzer 359 may evaluate or simulate a position and/or path or trajectory of an object (e.g., human) or portion thereof, determining whether any position or movements of the object will violate a safety rule (e.g., result in a human or portion thereof passing too close to a robot or robots as defined by the safety monitoring rules 125c (FIG. 1) implemented by the processor-based workcell safety system). For instance, where the processor-based workcell safety system employs a laser scanner that sections a portion of the operational environment into a grid, and a rule enforced by the processor-based workcell safety causes a stoppage, slowdown or precautionary occlusion when a human is within one grid position of the position of a portion of the robot, the rule analyzer 359 may identify transitions that would bring a portion of the robot within one grid of the position of a human, or predicted position of a human, so that weights associated with edges corresponding to those identified transitions can be adjusted (e.g., increased).” As well as Paragraphs 0148, “In response to the validation indicating that an anomalous system status does not exist for the processor-based workcell safety system 200 (e.g., all sensors 132 operating within defined operational parameters, a sufficient number of sensors 132 operating within defined operational parameters, a majority of sensors 132 operating consistently with one another within defined operational parameters), at 516 at least one processor 222 (FIG. 2) of the processor-based workcell safety system 200 (FIG. 2) monitors the operational environment 104 (FIG. 1) for occurrences of violations of safety monitoring rules 125c (FIG. 1) To monitor the operational environment for safety rule violations the processor(s) 222 may employ sensor data that represents objects in the operational environment 104 (FIG. 1). The processor(s) 222 may identify objects that are, or that appear to be, humans. The processor(s) 222 may determine a current position of one or more humans in the operational environment and/or a three-dimensional area occupied by the human(s). The processor(s) 222 may, optionally predict a path or a trajectory of the human over a period of time and/or a three-dimensional area occupied by the human(s) over the period of time. For instance, the processor(s) 222 may determine the path or trajectory or three-dimensional area based on a current position of the human(s) and based on previous movements of the human(s), and/or based on predicted behavior or training of the human(s). 
The processor(s) 222 may employ artificial intelligence or machine-learning to predict the path or trajectory of the human. The processor(s) 222 may determine a current position of one or more robots and/or a three-dimensional area occupied by the robot(s) over the period of time. For instance, the processor(s) 222 may determine the path or trajectory or three-dimensional area based on a current position of the robot(s) and a motion plan for the robot.” and 0173, “Optionally at 806, at least one processor 322 of the processor-based robot control system 300 determines a predicted behavior of a human (e.g., operator) in the workcell or operational environment 104 or who appears to be likely to enter the workcell or operational environment 104. The at least one processor 322 may, for example, determine the predicted behavior of the person in the workcell or operational environment 104 using machine-learning or artificial intelligence, being trained on a dataset of similar operational environments and robot scenarios. The at least one processor 322 may, for example, determine the predicted behavior of the human in the workcell or operational environment 104 based at least in part on a set of operator training guidelines, which specify positions or locations and times and/or speed of movement of operators and other humans when present in the operational environment 104. The at least one processor 222 may, for example, determine a predicted trajectory (e.g., path, speed) of a human at least partially through the workcell or operational environment 104.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to continuously monitor the state of the load and react accordingly as taught by Guy and to utilize machine learning when determining the allowable limits for safe operation of the system as taught by Hopkinson. Guy teaches monitoring and estimating the load dynamics information of an object being transported by a robotic system. This information allows the system to transport objects stably and to avoid tip-over events, thereby increasing the safety and effectiveness of the system. Hopkinson teaches using machine learning to predict the behavior of humans and other objects in the workspace when determining safe operating limits, which allows the system to operate efficiently while maintaining a high level of safety.
Regarding claim 28, where all the limitations of claim 26 are discussed above, Kriveshko further teaches:
28. (Previously Presented) The non-transitory, computer readable medium of claim 26, wherein the planned movement of the robot (Paragraph 0057, " The process of modeling the robot dynamics and mapping the safe region, however, may be simplified by assuming that the robot's current position is fixed and estimating the region that any portion of the robot may conceivably occupy within a short future time interval only. Thus, various embodiments of the present invention include approaches to modeling the robot dynamics and/or human activities in the workspace 100 and mapping the human-robot collaborative workspace 100 (e.g., calculating the safe and/or unsafe regions) over short intervals based on the current states (e.g., current positions, velocities, accelerations, geometries, kinematics, expected positions and/or orientations associated with the next action in the task/application) associated with the machinery (including the robot 106 and/or other industrial equipment) and the human operator. In addition, the modeling and mapping procedure may be repeated (based on, for example, the scanning data of the machinery and the human acquired by the sensor system 101 during performance of the task/application) over time, thereby effectively updating the safe and/or unsafe regions on a quasi-continuous basis in real time.") comprises at least one of a movement of the robot along a planned trajectory, (Paragraph 0100, "In one embodiment, the mapping module 246 can receive data from a conventional computer vision system that monitors the machinery, the sensor system that scans the machinery and the operator, and/or the robot (e.g., joint position data, keep-in zones and/or or intended trajectory), in step 1432. The computer vision system utilizes the sensor system to track movements of the machinery and the operator during physical execution of the task. The computer vision system is calibrated to the coordinate reference frame of the workspace and transmits to the mapping module 246 coordinate data corresponding to the movements of the machinery and the operator. In various embodiments, the tracking data is then provided to the movement-prediction module 245 for predicting the movements of the machinery and the operator in the next time interval (step 1428). Subsequently, the mapping module 246 transforms this prediction data into voxel-level representations to produce the POEs of the machinery and the operator in the next time interval (step 1430). Steps 1428-1432 may be iteratively performed during execution of the task.") a velocity of the movement of the robot along the planned trajectory, or a position of the robot along the planned trajectory. (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed. For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. 
In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.")
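For illustration only, checking a planned trajectory against a protective separation distance (PSD) in the spirit of the quoted Paragraph 0090 might be sketched as follows; the PSD value and waypoints are invented.

    import numpy as np

    def min_separation(trajectory, obstacle):
        # Smallest Euclidean distance from any waypoint to the obstacle
        return min(np.linalg.norm(p - obstacle) for p in trajectory)

    psd_m = 1.2                                          # assumed PSD
    waypoints = [np.array([x, 0.0]) for x in np.linspace(0.0, 3.0, 7)]
    if min_separation(waypoints, np.array([1.5, 0.9])) < psd_m:
        print("planned movement violates the PSD: adjust speed or path")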
Regarding claim 29, where all the limitations of claim 26 are discussed above, Kriveshko further teaches:
29. (Previously Presented) The non-transitory, computer readable medium of claim 26, wherein the instructions further cause the one or more processors to determine the safety envelope (Paragraph 0079, "In some embodiments, path optimization includes creation of a 3D “keep-in” zone (or volume) (i.e., a zone/volume to which the robot is restricted during operation) and/or a “keep-out” zone (or volume) (i.e., a zone/volume from which the robot is restricted during operation). Keep-in and keep-out zones restrict robot motion through safe limitations on the possible robot axis positions in Cartesian and/or joint space. Safety limits may be set outside these zones so that, for example, their breach by the robot in operation triggers a stop. Conventionally, robot keep-in zones are defined as prismatic bodies. For example, referring to FIG. 9A, a keep-in zone 902 determined using the conventional approach takes the form of a prismatic volume; the keep-in zone 902 is typically larger than the total swept volume 904 of the machinery during operation (which may be determined either by simulation or characterization using, for example, scanning data acquired by the sensor system 101). Based on the determined keep-in zone 902, the robot controller may implement a position-limiting function to enforce the position limiting of the machinery to be within the keep-in zone 902.") based on at least one of a threshold braking distance of the robot with the load, (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed. For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. 
Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.") a threshold turn radius of the robot with the load, (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).") a velocity of the planned movement of the robot with the load, a trajectory of the planned movement of the robot with the load, or an acceleration of the planned movement of the robot with the load. (Paragraph 0090, "Additionally or alternatively, once the machinery's current state (e.g., payload, position, orientation, velocity and/or acceleration) is acquired, a PSD (generally defined as the minimum distance separating the machinery from the operator for ensuring safety) and/or other safety-related measures can be computed.
For example, the PSD may be computed based on the POEs of the machinery and the human operator as well as any keep-in and/or keep-out zones. Again, because the machinery's state may change during execution of the task, the PSD may be continuously updated throughout the task as well. This can be achieved by, for example, using the sensor system 101 to periodically acquire the updated state of the machinery and the operator, and, based thereon, updating the PSD. In addition, the updated PSD may be compared to a predetermined threshold; if the updated PSD is smaller than the threshold, the control system 112 may adjust (e.g., reduce), for example, the speed of the machinery as further described below so as to bring the robot to a safe state. In various embodiments, the computed PSD is combined with the POE of the human operator to determine the optimal speed or robot path (or choosing among possible paths) for executing a task. For example, referring to FIG. 12A, the envelopes 1202-1206 represent the largest POEs of the operator at three instants, t.sub.1-3, respectively, during execution of a human-robot collaborative application; based on the computed PSDs 1208-1212, the robot's locations 1214-1218 that can be closest to the operator at the instants t.sub.1-t.sub.3, respectively, during performance of the task (while avoiding safety hazards) can be determined. As a result, an optimal path 1220 for the robot movement including the instants t.sub.1-t.sub.3 can be determined. Alternatively, instead of determining the unconstrained optimal path, the POE and PSD information can be used to select among allowed or predetermined paths given programmed or environmental constraints—i.e., identifying the path alternative that provides greatest efficiency without violating safety constraints.")
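For illustration only, a braking-distance model is one common way the envelope sizing recited by claim 29 could be realized; the reaction time and the load-mass derating in the sketch below are assumptions, not taken from the record.

    def braking_distance_m(speed_mps, decel_mps2, reaction_s=0.2):
        # Distance covered during reaction time plus the braking distance v^2/(2a)
        return speed_mps * reaction_s + speed_mps**2 / (2.0 * decel_mps2)

    def envelope_radius_m(speed_mps, load_mass_kg, base_decel_mps2=2.0):
        # A heavier load is assumed to weaken the achievable deceleration
        decel = base_decel_mps2 / (1.0 + load_mass_kg / 100.0)
        return braking_distance_m(speed_mps, decel) + 0.3    # fixed 0.3 m margin

    print(envelope_radius_m(speed_mps=1.5, load_mass_kg=40.0))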
Regarding claim 30, where all the limitations of claim 26 are discussed above, Kriveshko further teaches:
30. (Previously Presented) The non-transitory, computer readable medium of claim 26, wherein the instructions further cause the one or more processors to determine a predicted trajectory of the detected object based on at least one of … a velocity of the detected object, an acceleration of the detected object, (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).") a type of detected object, (Paragraph 0086, "Typically, the robot controller 1004 itself does not have a safe way to govern (e.g., modify) the state (e.g., speed, position, etc.) of the robot; rather, it only has a safe way to enforce a given state. To govern and enforce the state of the robot in a safe manner, in various embodiments, an object-monitoring system (OMS) 1010 is implemented to cooperatively work with the safety-rated component 1006 and non-safety-rated component 1008 as further described below. In one embodiment, the OMS 1010 obtains information about objects from the sensor system 1001 and uses this sensor information to identify relevant objects in the workspace 1000. 
For example, OMS 1010 may, based on the information obtained from the sensor system (and/or the robot), monitor whether the robot is in a safe state (e.g., remains within a specific zone (e.g., the keep-in zone), stays below a specified speed, etc.), and if not, issues a safe-action command (e.g., stop) to the robot controller 1004.") or a pose of the detected object. (Paragraph 0098, "FIG. 14A illustrates an exemplary approach for computing a POE of the machinery and/or human operator based at least in part on simulation of the machinery's operation in accordance herewith. In a first step 1402, the sensor system is activated to acquire information about the workspace, machinery and/or human operator. In a second step 1404, based on the scanning data acquired by the sensor system, the control system generates a 3D spatial representation (e.g., voxels) of the workspace (e.g., using the analysis module 242) and recognize the human and the machinery and movements thereof in the workspace (e.g., using the object-recognition module 243). In a third step 1406, the control system accesses the system memory to retrieve a model of the machinery that is acquired from the machinery manufacturer (or the conventional modeling tool) or generated based on the scanning data acquired by the sensor system. In a fourth step 1408, the control system (e.g., the simulation module 244) simulates operation of the machinery in a virtual volume in the workspace for performing a task/application. The simulation module 244 typically receives parameters characterizing the geometry and kinematics of the machinery (e.g., based on the machinery model) and is programmed with the task that the machinery is to perform; that task may also be programmed in the machinery (e.g., robot) controller. In one embodiment, the simulation result is then transmitted to the mapping module 246. (The division of responsibility between the modules 244, 246 is one possible design choice.) In addition, the control system (e.g., the movement-prediction module 245) may predict movement of the operator within a defined future interval when performing the task/application (step 1410). The movement prediction module 245 may utilize the current state of the operator and identification parameters characterizing the geometry and kinematics of the operator to predict all possible spatial regions that may be occupied by any portion of the human operator within the defined interval when performing the task/application. This data may then be passed to the mapping module 246, and once again, the division of responsibility between the modules 245, 246 is one possible design choice. Based on the simulation results and the predicted movement of the operator, the mapping module 246 creates spatial maps (e.g., POEs) of points within a workspace that may potentially be occupied by the machinery and the human operator (step 1412).")
Regarding claim 31, where all the limitations of claim 1 are discussed above, Kriveshko does not specifically teach the use of machine learning to determine safe operation limits based on a predicted environment. However, Hopkinson, in the same field of endeavor of robotics, teaches:
31. (New) The device of claim 1, wherein the machine-learning model is configured to utilize predicted trajectories of the one or more other tracked objects to fine-tune the safety envelope. (Paragraph 0173, “Optionally at 806, at least one processor 322 of the processor-based robot control system 300 determines a predicted behavior of a human (e.g., operator) in the workcell or operational environment 104 or who appears to be likely to enter the workcell or operational environment 104. The at least one processor 322 may, for example, determine the predicted behavior of the person in the workcell or operational environment 104 using machine-learning or artificial intelligence, being trained on a dataset of similar operational environments and robot scenarios. The at least one processor 322 may, for example, determine the predicted behavior of the human in the workcell or operational environment 104 based at least in part on a set of operator training guidelines, which specify positions or locations and times and/or speed of movement of operators and other humans when present in the operational environment 104. The at least one processor 222 may, for example, determine a predicted trajectory (e.g., path, speed) of a human at least partially through the workcell or operational environment 104.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and methods of operation as taught by Kriveshko with the ability to use a machine learning model to determine optimal operation limits and control as taught by Hopkinson. This would allow the system to operate efficiently based on future environmental states while maintaining a high level of safety.
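For illustration only, fine-tuning an envelope with a machine-learning model might be sketched as follows, using features derived from the predicted trajectories of tracked objects; the feature set, training data, and model choice are invented and are not Hopkinson's implementation.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Stand-in training data: [closest predicted approach (m), approach speed (m/s)]
    X = np.array([[2.5, 0.3], [1.0, 1.2], [0.6, 1.5], [3.0, 0.1]])
    y = np.array([1.0, 1.4, 1.8, 1.0])          # envelope scale factors (invented)
    model = GradientBoostingRegressor().fit(X, y)

    base_envelope_m = 1.2
    scale = model.predict([[0.8, 1.1]])[0]      # features from predicted trajectories
    fine_tuned_envelope_m = base_envelope_m * scale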
Regarding claim 32, where all the limitations of claim 1 are discussed above, Kriveshko does not specifically teach the use of machine learning to determine safe operation limits based on a predicted environment. However, Hopkinson, in the same field of endeavor of robotics, teaches:
32. (New) The device of claim 1, wherein the machine-learning model is configured to improve prediction of human motion within an environment of the robot and wherein the one or more other tracked objects include one or more humans detected within the environment. (Paragraph 0173, “Optionally at 806, at least one processor 322 of the processor-based robot control system 300 determines a predicted behavior of a human (e.g., operator) in the workcell or operational environment 104 or who appears to be likely to enter the workcell or operational environment 104. The at least one processor 322 may, for example, determine the predicted behavior of the person in the workcell or operational environment 104 using machine-learning or artificial intelligence, being trained on a dataset of similar operational environments and robot scenarios. The at least one processor 322 may, for example, determine the predicted behavior of the human in the workcell or operational environment 104 based at least in part on a set of operator training guidelines, which specify positions or locations and times and/or speed of movement of operators and other humans when present in the operational environment 104. The at least one processor 222 may, for example, determine a predicted trajectory (e.g., path, speed) of a human at least partially through the workcell or operational environment 104.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and methods of operation as taught by Kriveshko with the ability to use a machine learning model to determine optimal operation limits and control as taught by Hopkinson. This would allow the system to operate efficiently while maintaining a high level of safety for humans within the environment.
Regarding claim 33, where all the limitations of claim 1 are discussed above, Kriveshko does not specifically teach the use of machine learning to determine safe operation limits based on a predicted environment and possible collisions. However, Hopkinson, in the same field of endeavor of robotics, teaches:
33. (New) The device of claim 1, wherein the processor is further configured to determine potential collision risks between the robot and the one or more other tracked objects in a near future time period based on the fine-tuned safety envelope. (Paragraphs 0169, “The processor-based workcell safety system 200 evaluates safety conditions based on a set of safety monitoring rules 125c (FIG. 1) which include a number of conditions in which the processor-based workcell safety system 200 triggers at least one of a slow down or a stoppage of operation of the at least one robot 102 that operates in the workcell or operational environment 104. For example, the processor-based workcell safety system 200 may trigger a stoppage or slowdown, or even cause a portion of the operational environment 104 to be indicated as occluded as a precaution in response to detection of a transient object (e.g., a human or potentially a human) located within a defined distance of a portion of the robot(s) 102 or within a defined distance of a projected trajectory of a portion of the robot(s) 102. The distance may or may not be a straight line distance, and may, for example take into account a resolution of the particular sensor 132 (FIG. 1). Also for example, the processor-based workcell safety system 200 may trigger a stoppage or slowdown, or even cause a portion of the operational environment 104 to be indicated as occluded as a precaution in response to detection of a predicted collision or close approach of a trajectory of a transient object (e.g., a human or potentially a human) with a projected trajectory of a portion of one or more robots 102.” as well as Paragraph 0173, “Optionally at 806, at least one processor 322 of the processor-based robot control system 300 determines a predicted behavior of a human (e.g., operator) in the workcell or operational environment 104 or who appears to be likely to enter the workcell or operational environment 104. The at least one processor 322 may, for example, determine the predicted behavior of the person in the workcell or operational environment 104 using machine-learning or artificial intelligence, being trained on a dataset of similar operational environments and robot scenarios. The at least one processor 322 may, for example, determine the predicted behavior of the human in the workcell or operational environment 104 based at least in part on a set of operator training guidelines, which specify positions or locations and times and/or speed of movement of operators and other humans when present in the operational environment 104. The at least one processor 222 may, for example, determine a predicted trajectory (e.g., path, speed) of a human at least partially through the workcell or operational environment 104.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and methods of operation as taught by Kriveshko with the ability to use a machine learning model to determine optimal operation limits and control as taught by Hopkinson. This would allow the system to avoid possible collision with humans in the environment while maintaining a high level of efficiency and avoiding slowdown or stoppage.
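For illustration only, a near-future collision-risk check of the kind claim 33 recites might step the robot's planned path and an object's predicted path in lockstep and test their separation against the fine-tuned envelope; all values below are hypothetical.

    import numpy as np

    def collision_risk(robot_path, object_path, envelope_m):
        # True if, at any common time step, the object enters the envelope
        return any(np.linalg.norm(r - o) < envelope_m
                   for r, o in zip(robot_path, object_path))

    t = np.linspace(0.0, 2.0, 21)                        # next 2 s at 0.1 s steps
    robot = [np.array([0.5 * s, 0.0]) for s in t]        # robot moves 0.5 m/s along x
    human = [np.array([2.0 - 0.8 * s, 0.2]) for s in t]  # human approaches head-on
    if collision_risk(robot, human, envelope_m=0.8):
        print("potential collision within the near-future window")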
Claims 2 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Kriveshko in view of Guy and Hopkinson, and further in view of Shein et al. (US 20110190933 A1), hereinafter Shein.
Regarding claim 2, where all the limitations of claim 1 are discussed above, Kriveshko does not specifically teach state information being dynamic status of a load. However, Guy, in the same field of endeavor of robotic control, teaches:
2. (Currently Amended) The device of claim 1, wherein the dynamic status of the load (Paragraph 0075, "Another inventive subject matter includes a method of stably transporting a load by a first and a second robot. FIG. 10 depicts flow chart 1000 of one embodiment of the method. In this embodiment, the method begins with step 1010, which provides a first and second robot, each having a motive mechanism that is independently operable from the other, as described above with step 510. In step 1020, each of the robots obtains estimates of a load width, a load length, and a load height in a manner as described in step 520, and in step 1030 each robot obtains an estimate of load stability. In step 1040, each robot autonomously determines how to engage the load for stable transportation, as described with respect to FIG. 8, and in step 1050 each robot autonomously cooperates with the other robot to stably transport the load. It is contemplated that as the load is transported by the robots, the stability of the load may change. In step 1060, each of the robots autonomously reconfigures its engagement with the load in response to changes in load stability during transport.")
However, Shein, in the same field of endeavor of robotic control, teaches:
… a distance of the load from a center of gravity of the robot, … (Paragraph 0005, “In some implementations, the anti-tip behavior determines a payload deck position relative to the chassis and provides the outcome evaluation based at least in part on the payload deck position. The anti-tip behavior may determine a position of a center of gravity of the payload deck relative to a center of gravity of the chassis and provide the outcome evaluation based at least in part on the position of the center of gravity of the payload deck. In some examples, the anti-tip behavior determines a position of a center of gravity of the entire robot relative to an operating envelope, and provides the outcome evaluation based at least in part on the position of the center of gravity of the entire robot.” This demonstrates the tracking of the position of the load, which is carried on the payload deck, relative to the position of the chassis of the robot. This allows the system to react to this difference and ensure the system does not tip over.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to continuously monitor the state of the load and react accordingly as taught by Guy, as well as with the ability to track the relative position of the load against the center of gravity of the robot as taught by Shein. Guy teaches monitoring and estimating the load dynamics of an object being transported by a robotic system. This information allows the system to transport objects stably, thereby increasing the safety and effectiveness of the system.
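As an aid in understanding the Shein citation, the following is a minimal illustrative sketch, in Python, of a center-of-gravity check of the kind described in Paragraph 0005 of Shein. It is a simplified one-dimensional model; all names and numeric values are hypothetical and do not appear in Shein.

    # Illustrative sketch only; hypothetical names, not from any cited
    # reference. Tracks the load's center of gravity relative to the robot
    # chassis and flags a tip risk when the combined center of gravity drifts
    # outside the support base (simplified here to a 1-D half-width).

    def combined_cog_offset(chassis_mass: float, load_mass: float,
                            load_offset: float) -> float:
        """Horizontal offset of the combined center of gravity from the
        chassis center, given the load's offset from that center."""
        return (load_mass * load_offset) / (chassis_mass + load_mass)

    def tip_risk(chassis_mass: float, load_mass: float, load_offset: float,
                 support_half_width: float) -> bool:
        """True when the combined center of gravity leaves the support base."""
        offset = combined_cog_offset(chassis_mass, load_mass, load_offset)
        return abs(offset) > support_half_width

    # Example: a 40 kg load shifted 0.5 m from the center of a 60 kg chassis
    # whose wheels span +/- 0.15 m; the combined CoG sits at 0.2 m, so the
    # robot must react.
    if tip_risk(chassis_mass=60.0, load_mass=40.0, load_offset=0.5,
                support_half_width=0.15):
        print("Reconfigure engagement with the load")  # cf. Guy, step 1060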
Regarding claim 27, where all the limitations of claim 26 are discussed above, Kriveshko does not specifically teach the state information including a dynamic status of a load. However, Guy, in the same field of endeavor of robotic control, teaches:
27. (Currently Amended) The non-transitory, computer readable medium of claim 26, wherein the dynamic status of the load (Paragraph 0075, "Another inventive subject matter includes a method of stably transporting a load by a first and a second robot. FIG. 10 depicts flow chart 1000 of one embodiment of the method. In this embodiment, the method begins with step 1010, which provides a first and second robot, each having a motive mechanism that is independently operable from the other, as described above with step 510. In step 1020, each of the robots obtains estimates of a load width, a load length, and a load height in a manner as described in step 520, and in step 1030 each robot obtains an estimate of load stability. In step 1040, each robot autonomously determines how to engage the load for stable transportation, as described with respect to FIG. 8, and in step 1050 each robot autonomously cooperates with the other robot to stably transport the load. It is contemplated that as the load is transported by the robots, the stability of the load may change. In step 1060, each of the robots autonomously reconfigures its engagement with the load in response to changes in load stability during transport.")
However, Shein, in the same field of endeavor of robotic control, teaches:
… a distance of the load from a center of gravity of the robot, … (Paragraph 0005, “In some implementations, the anti-tip behavior determines a payload deck position relative to the chassis and provides the outcome evaluation based at least in part on the payload deck position. The anti-tip behavior may determine a position of a center of gravity of the payload deck relative to a center of gravity of the chassis and provide the outcome evaluation based at least in part on the position of the center of gravity of the payload deck. In some examples, the anti-tip behavior determines a position of a center of gravity of the entire robot relative to an operating envelope, and provides the outcome evaluation based at least in part on the position of the center of gravity of the entire robot.” This demonstrates the tracking of the position of the load, which is carried on the payload deck, relative to the position of the chassis of the robot. This allows the system to react to this difference and ensure the system does not tip over.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to continuously monitor the state of the load and react accordingly as taught by Guy, as well as with the ability to track the relative position of the load against the center of gravity of the robot as taught by Shein. Guy teaches monitoring and estimating the load dynamics of an object being transported by a robotic system. This information allows the system to transport objects stably, thereby increasing the safety and effectiveness of the system.
Claims 6 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Kriveshko in view of Guy and Hopkinson, and further in view of Johnson et al. (US 20200086487 A1), hereinafter Johnson.
Regarding claim 6, where all the limitations of claim 5 are discussed above, Kriveshko does not specifically teach using a machine learning model to process historical data for predicting the state of objects. However, Johnson, in the same field of endeavor of robotic control, teaches:
6. (Original) The device of claim 5, wherein the processor is configured to receive the past trajectories from a machine learning model associated with the other objects. (Paragraph 0076, "Returning to FIG. 4, to continue the method 440, the 3D human pose 445 is processed using a deep recurrent neural network (RNN) to predict future motion (e.g., generate the 3D motion prediction 447) of the human based on the past motion. In an example embodiment of the present disclosure, the method predicts the human motion as described in Martinez et al., “On Human Motion Prediction Using Recurrent Neural Networks.”")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to utilize a machine learning model for predicting the state of objects in the workspace as taught by Johnson. This would improve the reliability of the prediction, thereby increasing the safety of humans within the robotic system’s operating environment.
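To clarify the Johnson citation, the following is a minimal illustrative sketch, in Python with PyTorch, of a recurrent model that maps past trajectories of tracked objects to a predicted next position. It is not the network of Johnson or of Martinez et al.; the architecture and all names are hypothetical.

    # Illustrative sketch only; hypothetical architecture, not the RNN
    # described in Johnson or in Martinez et al. Shows how past trajectories
    # could feed a recurrent model that predicts a future position.
    import torch
    import torch.nn as nn

    class MotionPredictor(nn.Module):
        def __init__(self, coord_dim: int = 2, hidden: int = 32):
            super().__init__()
            self.rnn = nn.GRU(input_size=coord_dim, hidden_size=hidden,
                              batch_first=True)
            self.head = nn.Linear(hidden, coord_dim)  # next-position regressor

        def forward(self, past: torch.Tensor) -> torch.Tensor:
            # past: (batch, time, coord_dim) of observed positions
            out, _ = self.rnn(past)
            return self.head(out[:, -1, :])  # predicted next position

    # Example: predict the next (x, y) of one tracked human from 10 past
    # samples (random stand-in for real sensor history).
    model = MotionPredictor()
    past_trajectory = torch.randn(1, 10, 2)
    next_position = model(past_trajectory)
    print(next_position.shape)  # torch.Size([1, 2])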
Regarding claim 34, where all the limitations of claim 26 are discussed above, Kriveshko does not specifically teach using a machine learning model to process historical data for predicting the state of objects. However, Johnson, in the same field of endeavor of robotic control, teaches:
34. (New) The non-transitory, computer readable medium of claim 26, wherein the machine-learning model is configured to utilize predicted trajectories of the one or more other tracked objects to fine-tune the safety envelope. (Paragraph 0076, "Returning to FIG. 4, to continue the method 440, the 3D human pose 445 is processed using a deep recurrent neural network (RNN) to predict future motion (e.g., generate the 3D motion prediction 447) of the human based on the past motion. In an example embodiment of the present disclosure, the method predicts the human motion as described in Martinez et al., “On Human Motion Prediction Using Recurrent Neural Networks.”")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Kriveshko with the ability to utilize a machine learning model for predicting the state of objects in the workspace as taught by Johnson. This would improve the reliability of the prediction, thereby increasing the safety of humans within the robotic system’s operating environment.
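As a further illustration of the claim 34 mapping, the following is a minimal sketch, in Python, of a heuristic that fine-tunes (inflates) a safety envelope based on predicted trajectories of tracked objects. The scaling rule and all names are hypothetical and are not taken from any cited reference.

    # Illustrative sketch only; hypothetical heuristic, not from any cited
    # reference. Widens a baseline safety envelope as the predicted
    # trajectories of tracked objects approach the robot.

    def fine_tune_envelope(base_radius: float, robot_position: tuple,
                           predicted_paths: list,
                           caution_distance: float = 2.0,
                           max_scale: float = 2.0) -> float:
        """Scale the envelope up as the closest predicted approach shrinks."""
        rx, ry = robot_position
        closest = min(
            (((x - rx) ** 2 + (y - ry) ** 2) ** 0.5
             for path in predicted_paths for (x, y) in path),
            default=float("inf"),
        )
        if closest >= caution_distance:
            return base_radius  # nothing predicted nearby; keep the baseline
        # Linearly inflate toward max_scale as the closest approach goes to 0.
        scale = 1.0 + (max_scale - 1.0) * (1.0 - closest / caution_distance)
        return base_radius * scale

    # Example: a predicted path passing 0.5 m from the robot inflates a
    # 0.4 m baseline envelope to 0.7 m.
    radius = fine_tune_envelope(0.4, (0.0, 0.0), [[(1.5, 0.0), (0.5, 0.0)]])
    print(round(radius, 3))  # 0.7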
Conclusion
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the Applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER KENIRY whose telephone number is (571)270-5468. The examiner can normally be reached M-F 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.J.K./Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657