DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 08/27/2025 have been fully considered but are moot because the claim language added or modified by the amendment necessitates new grounds of rejection. The Examiner has augmented the prior art rejections in light of Applicant's amendments and/or arguments, as indicated below.
Claim Objections
Claims 11-16 are objected to because of the following informalities:
Regarding claim 11, the claim recites “…the processor-executable instructions which cause the robot to identify a first fault condition of the robot cause the robot to: ….”. Based on the claim language, the claim should read, “…the processor-executable instructions which cause the robot to identify the first fault condition of the robot cause the robot to: ….”.
Regarding claims 12-16, they are objected to for the same reasons as provided in the objection to claim 11, mutatis mutandis.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 11-15, and 17-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Avraham (US 20190359424 A1).
Regarding claim 1, Avraham teaches a robot selectively operable in a plurality of control modes including a full autonomy mode and a graphical user interface (GUI) control mode having less autonomy than the full autonomy mode, and an analogous teleoperation control mode having less autonomy than the GUI control mode (Abstract, Fig 3A, para 0005 wherein the robot is controlled using various control modes; “[0005] One exemplary embodiment of the disclosed subject matter is system comprising: a plurality of robotic picking apparatuses configured to pick an item from a container and place the item in a different place, wherein each robotic picking apparatus is configured to operate in one of at least three operational modes: an autonomous mode, a human-assisted mode, and a remote-controlled mode”), the robot comprising:
at least one processor (para 0091);
a communication interface that communicatively couples the at least one processor to a tele-operation system (Fig 4, para 0092 wherein the I/O module for communication and data exchange is provided; “I/O Module 405 may be utilized to provide an output to and receive input from a user, such as, for example obtain information from Sensors 450 at the vicinity of the robotic picking apparatus, to communicate with, send data to and receive instructions”);
at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing processor-executable instructions which, when executed by the at least one processor (para 0100), cause the robot to:
operate in the full autonomy mode (para 0074, fig 3B wherein the robot is operated in autonomous mode; “[0074] On Step 320, in case an adequate operation plan was automatically devices, the plan is executed by the robotic picking apparatus in autonomous mode”);
identify a first fault condition of the robot during operation of the robot in the full autonomy mode (para 0071-0076, fig 3 wherein the inadequate plan to execute task is identified when the robot is operating autonomously; “[0071] It is noted that in some cases, an operation plan may appear to be adequate and upon execution, such plan may fail. In such a case, Step 310 may be re-performed and in such a case, the determination may be that there is no adequate plan at hand”);
in response to identifying the first fault condition during operation of the robot in the full autonomy mode, change the control mode from the full autonomy mode to the GUI control mode (Fig 3, para 0040, 0074-0083 wherein the control mode is changed to human-assisted mode when an inadequate plan is identified; “0040 Additionally or alternatively, different human-input may require use of different input devices, such as a touchscreen or other pointing devices for human-assisted mode in providing assistance”; “[0080] On Step 350, a human user is prompted to provide the human assisted mode. In some exemplary embodiments, the user may be located in a remote site, such as in a service center located in a different country”);
transmit a request for GUI-based action instructions (para 0080 wherein “[0080] On Step 350, a human user is prompted to provide the human assistance”);
receive GUI-based action instructions from the teleoperation system (Fig 3A, para 0040, 0082 wherein “[0082] The user may provide her assistance, and based thereon, a plan to pick up the item may be determined (Steps 300-310) and potentially used autonomously (320)”);
execute the GUI-based action instructions (Fig 3A, para 0082 wherein the robot generates and executes a picking plan based on human assistance);
while the robot is executing the GUI-based action instructions, identify a second fault condition of the robot, the second fault condition indicating that the first fault condition has not been resolved (Fig 3A, Para 0083 wherein in step 330 it is determined that partial assistance is insufficient to resolve failure to create adequate plan; “[0083] If, on the other hand, it is determined that partial assistance is insufficient to create an adequate plan, then the robotic picking apparatus may be operated in remote-controlled mode (370)”);
in response to identifying the second fault condition of the robot:
change the control mode of the robot to the analogous teleoperation control mode (Fig 3A, Para 0083 wherein “[0083] If, on the other hand, it is determined that partial assistance is insufficient to create an adequate plan, then the robotic picking apparatus may be operated in remote-controlled mode (370)”);
transmit a request to the teleoperation system for analogous teleoperation-based action instructions (para 0084 wherein “[0084] On Step 370, a user may be prompted to telecontrol the robotic picking apparatus”);
receive analogous teleoperation-based action instructions from the teleoperation system (para 0084 wherein “[0084] On Step 370, a user may be prompted to telecontrol the robotic picking apparatus”);
execute the analogous teleoperation action instructions (Fig 3A, para 0084 wherein robot is operated in remote controlled mode; “[0084] On Step 370, a user may be prompted to telecontrol the robotic picking apparatus”);
train the GUI control mode based on the analogous teleoperation action instructions to increase a level of robot autonomy of the GUI control mode (Fig 3A, para 0085, 0093 wherein the planning module generating the operation plan during human assistance is also trained using inputs received during remote controlled mode; “0085 Once the task is completed, such operation plan may be used to train the planning module to improve it so that if a similar scenario is encountered again, the planning module may be capable of providing an adequate operation plan without user intervention”; “0093 Memory 407 may retain the order, including the ordered items, the order of picking up items, or the like. Memory 407 may retain the operational plan. Additionally or alternatively, Memory 407 may retain history of operational plans, outcomes thereof, training dataset obtained based on user input”); and
train the full autonomy mode based on both the analogous teleoperation instructions and the GUI-based action instructions (Fig 3A, para 0085 wherein inputs received during remote controlled and human assist mode is used for training the planning module; “Once the task is completed, such operation plan may be used to train the planning module to improve it so that if a similar scenario is encountered again, the planning module may be capable of providing an adequate operation plan without user intervention. Additionally or alternatively, input of the human-assisted mode may be utilized to improve the algorithms used by the planning module. For example, computer vision algorithms may be trained using the image shown to the user and the human input provided by the user, so that in future cases, if similar cases are encountered again, the system may solve the problem in a high enough confidence without human assistance”).
Regarding claim 2, Avraham teaches wherein operation of the robot in the full autonomy mode requires no input from the operator of the tele-operation system (para 0037 wherein “A first operational mode may be an autonomous mode. In the autonomous mode, the picking operation may be executed without any human intervention or assistance”).
Regarding claim 11, Avraham teaches at least one sensor, wherein: the processor-executable instructions, when executed by the at least one processor further cause the robot to capture, by the at least one sensor, sensor data representing an environment of the robot (para 0053 wherein “In some exemplary embodiments, Robotic Picking Apparatus 130 may be controlled by a controller (not shown) that is configured to receive input from sensors (not shown), such as video cameras, pressure sensors, Infrared (IR) cameras, or the like, and control the operation of Robotic Picking Apparatus 130”) and
the processor-executable instructions which cause the robot to identify a first fault condition of the robot cause the robot to: identify, by the at least one processor based on the sensor data, that the robot has failed to complete an action to be performed by the robot (para 0054 wherein it is identified that the robot has failed to locate the item to be handled; “In some cases, additional human input, may be sufficient to devise the plan. For example, the plan may not be devised because the item cannot be located by the computer vision algorithm in picture of the container. In such a case, human assistance stations in a remote site may be contacted via a Network (140)”).
Regarding claim 12, Avraham teaches at least one sensor, wherein: the processor-executable instructions, when executed by the at least one processor further cause the robot to capture, by the at least one sensor, sensor data representing an environment of the robot (para 0053 wherein “In some exemplary embodiments, Robotic Picking Apparatus 130 may be controlled by a controller (not shown) that is configured to receive input from sensors (not shown), such as video cameras, pressure sensors, Infrared (IR) cameras, or the like, and control the operation of Robotic Picking Apparatus 130”) and
the processor-executable instructions which cause the robot to identify a first fault condition of the robot cause the robot to: identify, by the at least one processor based on the sensor data, that the robot is unable to complete an action to be performed by the robot (para 0054 wherein it is identified that the robot is unable to locate the item to be handled; “In some cases, additional human input, may be sufficient to devise the plan. For example, the plan may not be devised because the item cannot be located by the computer vision algorithm in picture of the container. In such a case, human assistance stations in a remote site may be contacted via a Network (140)”).
Regarding claim 13, Avraham teaches at least one sensor, wherein: the processor-executable instructions, when executed by the at least one processor further cause the robot to capture, by the at least one sensor, sensor data representing an environment of the robot (para 0053 wherein “In some exemplary embodiments, Robotic Picking Apparatus 130 may be controlled by a controller (not shown) that is configured to receive input from sensors (not shown), such as video cameras, pressure sensors, Infrared (IR) cameras, or the like, and control the operation of Robotic Picking Apparatus 130”) and
the processor-executable instructions which cause the robot to identify a first fault condition of the robot cause the robot to: identify, by the at least one processor based on the sensor data, that the robot has improperly completed an action to be performed by the robot (para 0071 wherein it is identified that the robot has improperly executed the task; “[0071] It is noted that in some cases, an operation plan may appear to be adequate and upon execution, such plan may fail. In such a case, Step 310 may be re-performed and in such a case, the determination may be that there is no adequate plan at hand”).
Regarding claim 14, Avraham teaches the processor-executable instructions which cause the robot to identify a first fault condition of the robot cause the robot to: identify, by the at least one processor, that the at least one processor is unable to determine an action or movement to be performed by the robot (para 0072-0074 wherein “[0072] Additionally or alternatively, instead of attempting to devise a plan automatically and determine that such attempt was unsuccessful, it may be a-priori estimated, given the sensor inputs available and the task at hand, whether the system is capable to autonomously determine the operation plan”; “[0074] On Step 320, in case an adequate operation plan was automatically devices, the plan is executed by the robotic picking apparatus in autonomous mode. Otherwise, the task at hand may not be performed in the autonomous mode and Step 330 may be performed”).
Regarding claim 15, Avraham teaches the processor-executable instructions which cause the at least one processor to identify a first fault condition of the robot cause the robot to: identify, by the at least one processor, that the at least one processor is unable to determine an action or movement to be performed by the robot with sufficient confidence to perform the determined action or movement (para 0007 wherein “determine confidence measurement for the operation plan, wherein the operation plan is automatically determined using the sensor input; in case the confidence measurement is above a threshold, provide the operation plan for execution by the robotic picking apparatus in the autonomous mode; in case the confidence measurement is below the threshold, select one of human-assisted mode or remote-controlled mode for the robotic picking apparatus”).
Regarding claim 17, Avraham teaches wherein the processor-executable instructions when executed by the at least one processor further cause the robot to: in response to identifying the first fault condition of the robot during operation of the robot in the full autonomy mode, output a first fault indication (para 0076 wherein the fault indication is provided; “[0076] In some exemplary embodiments, the failure to formulate an operation plan may be caused due to specific failure of a specific algorithm or lack of confidence in the products of such algorithm. In such a case, human input can be solicited to overcome the deficiencies of such algorithm and generate the operation plan using the human input. For example, a human user may be contacted and shown the image of the bulk container. The user may then identify contours of the item to be picked”).
Regarding claim 18, Avraham teaches wherein the processor-executable instructions which cause the robot to output the fault indication cause the robot to: send, by the communication interface, the fault indication to be received by the tele-operation system (Fig 4, para 0076 wherein the fault indication is received by the tele-operation system; “[0076] In some exemplary embodiments, the failure to formulate an operation plan may be caused due to specific failure of a specific algorithm or lack of confidence in the products of such algorithm. In such a case, human input can be solicited to overcome the deficiencies of such algorithm and generate the operation plan using the human input. For example, a human user may be contacted and shown the image of the bulk container. The user may then identify contours of the item to be picked”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Avraham (US 20190359424 A1) in view of Mathieu (US 20210394359 A1).
Regarding claim 16, Avraham teaches the processor-executable instructions which cause the at least one processor to identify a first fault condition of the robot (para 0007 wherein “determine confidence measurement for the operation plan, wherein the operation plan is automatically determined using the sensor input; in case the confidence measurement is above a threshold, provide the operation plan for execution by the robotic picking apparatus in the autonomous mode; in case the confidence measurement is below the threshold, select one of human-assisted mode or remote-controlled mode for the robotic picking apparatus”) and receiving operator input from the operator of the tele-operation system (Fig 4, para 0011 wherein “[0011] Optionally, said controller is configured to utilize human input to improve capabilities of the robotic picking apparatus in autonomous mode”).
However, Avraham fails to explicitly teach identifying that the robot has received operator input from the operator which indicates a fault condition of the robot.
Mathieu teaches identifying that the robot has received operator input from the operator which indicates a fault condition of the robot (0084 wherein “The risk balancing module may monitor the execution of the tasks and support dynamic changes in autonomy level. This change in autonomy can be triggered either by human user inputs or in response to unplanned interference or safety situations with no autonomous mitigation in place”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Avraham’s teachings of receiving operator input from the operator and identifying the fault to incorporate Mathieu’s teachings of identifying that the robot has received operator input from the operator which indicates a fault condition of the robot in order to allow the user to modify the autonomy level as desired by the user.
Regarding claim 19, Avraham teaches wherein the processor-executable instructions further cause the robot to: in response to identifying the fault condition of the robot during operation of the robot in the first mode, output a fault indication (para 0076 wherein the fault indication is provided; “[0076] In some exemplary embodiments, the failure to formulate an operation plan may be caused due to specific failure of a specific algorithm or lack of confidence in the products of such algorithm. In such a case, human input can be solicited to overcome the deficiencies of such algorithm and generate the operation plan using the human input. For example, a human user may be contacted and shown the image of the bulk container. The user may then identify contours of the item to be picked”).
However, Avraham fails to teach outputting the fault indication by an audio output device.
Mathieu teaches outputting the audio signals by an audio output device (para 0161 wherein “[0161] The audio I/O interface component 1046 is configured to provide audio input and/or output capabilities to the computing device. In some configurations, the audio I/O interface component 1046 includes a microphone configured to collect audio signals. In some configurations, the audio I/O interface component 1046 includes a headphone jack configured to provide connectivity for headphones or other external speakers”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Avraham’s teachings of outputting a fault indication to incorporate Mathieu’s teachings of outputting audio signals by an audio output device. Doing so would allow the user to be notified of any faults so that appropriate actions can be taken without delay.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAGAR KC whose telephone number is (571)272-7337. The examiner can normally be reached M-F 8:30 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAGAR KC/Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657