DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/29/2026 has been entered.
Response to Amendment
The amendment filed on 01/29/2026, in response to the Final Office Action dated 10/01/2025, has been received and made of record. Claims 1, 4-5, 9-21 and 23 are pending in the current application. Claims 2-3, 6-8 and 22 have been cancelled. Claim 23 is newly added.
Response to Arguments
Applicant’s arguments filed on 01/29/2026 have been fully considered.
In the Arguments/Remarks:
Re: Rejection of the Claims Under 35 U.S.C. 112(b)
Rejection of the claims under 35 U.S.C. 112(b) has been withdrawn in view of applicant’s amendments.
Re: Rejection of the Claims Under 35 U.S.C. 102(a)(1) and 103
Applicant’s arguments regarding the rejection of the claims under 35 U.S.C. 102(a)(1) and 103 have been fully considered, but are rendered moot in view of the new grounds of rejection (see below) necessitated by applicant’s amendments.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 9-11, 13, 15-20 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Sureka (WO 2018/021044 A1) in view of Morey (US 2019/0351548 A1).
Regarding claim 1, Sureka teaches a method for controlling operation of at least one machine configured to carry out pick-and-place or singulation tasks on objects, wherein the at least one machine comprises at least one functional device that comprises at least one functional element for carrying out at least one task, the method comprising: displaying a first selection menu on a graphical user interface via at least one output device [(see at least paragraphs 17-20) As in 17 “Various embodiments of the invention relate to performing an operation with an industrial robot(s). The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot). The controller configuration is performed using a robot programming system (or environment). Such a system can have an interface and a processor to generate one or more control logic for control of the industrial robot to perform the operation. For instance, the interface can display graphical objects, which can be selected, edited as required and combined by the user to generate the one or more control logic. The generated logic can then be provided to the controller of the robot, for controlling the robot to perform the operation.”],
Sureka teaches controlling the operation of the at least one machine on the basis of control information in order to carry out the at least one task [(see at least paragraph 17) “For instance, the interface can display graphical objects, which can be selected, edited as required and combined by the user to generate the one or more control logic. The generated logic can then be provided to the controller of the robot, for controlling the robot to perform the operation.”], wherein the control information is generated on the basis of the first task parameter type and the second task parameter type that relate to the operation of the at least one machine in order to carry out the at least one task. [(see at least paragraphs 17-20) As in 17 “The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot).” As in 20 “Further, the method 100 includes at 104 defining a control logic (one or more control logic) for the operation with the industrial robot, and linking the control logic. Each graphical object is associated with a task. Accordingly, a control logic can be defined for the task, and stored in a memory (e.g. as part of a library). Thus, when a selection of multiple graphical objects is performed in a specific order / flow, an operation can be defined. Here, according to the defined order / flow, the one or more control logic corresponding to the selected tasks can be fetched and linked (e.g. with a linker).”]
Sureka does not explicitly teach the first selection menu including a first selection option corresponding to a first task parameter type: in response to receiving a user selection of the first selection option of the first selection menu, displaying a second selection menu including a second selection option on the graphical user interface via the at least one output device, wherein the second selection option corresponds to a second task parameter type, wherein the first task parameter type and the second task parameter type are stored on at least one data storage device in a predefined linked manner on the basis of at least one hierarchy criterion that describes a hierarchically ordered link of the first task parameter type to the second task parameter type such that the second selection option is displayed and selectable in response to user selection of the first selection option and such that only task parameter types within a set of predefined linked task parameter types can be linked and selected to generate control information.
However, Morey teaches the first selection menu including a first selection option corresponding to a first task parameter type: in response to receiving a user selection of the first selection option of the first selection menu, displaying a second selection menu including a second selection option on the graphical user interface via the at least one output device, wherein the second selection option corresponds to a second task parameter type, wherein the first task parameter type and the second task parameter type are stored on at least one data storage device in a predefined linked manner on the basis of at least one hierarchy criterion that describes a hierarchically ordered link of the first task parameter type to the second task parameter type such that the second selection option is displayed and selectable in response to user selection of the first selection option and such that only task parameter types within a set of predefined linked task parameter types can be linked and selected to generate control information [(see at least Figs.1-3B, paragraphs 46-61) As in 47 “Providing a task-based selection of robotic platforms allows choice of robotic devices based on tasks that the robotic device is expected to perform. This task-based approach can simplify decisions related to obtaining and using robotic devices, while optimizing costs and components related to the robotic device by using tasks-specific configurations. Ordering robotic devices based on tasks allows relatively-novice owners to obtain customized robotic devices, as the owner does not have to know about differences in robotic components, designs, or other criteria that are unrelated to the task(s) to be performed by the robotic device. Further, as components change, specifying robotic devices based on tasks eases the introduction of new components into robotic devices for a robotic device purchaser ordering robotic devices.
Thus, these techniques enable easy customization of robotic devices based on task information readily available to a (future) robotic device owner or user.” As in 61 “Task selections 310 include a question “What is your robot going to do inside the home?” that prompts for task selections related to environment selection 130 of “Inside the Home” and environment sub-selections 210 “Kitchen”, “Bedrooms”, and “Living room/dining room/hallways” chosen earlier in scenario 100. FIG. 3A shows that task selections 310 include “Vacuum floors”, “Sweep/mop floors”, “Clean counters/work surfaces”, “Move objects”, and “Home safety/monitoring”. In other scenarios and/or embodiments of robotic ordering system 120, more, fewer, and/or different task selections 310 can be provided and/or chosen. In scenario 100 and as shown in FIG. 3A, specific task selections 310 of “Vacuum floors”, “Clean counters/work surfaces”, and “Move objects” are chosen, indicating that a robotic device is selected to be used to vacuum floors, clean counters and/or work surfaces, and move objects in kitchen(s), bedroom(s), living room(s), dining room(s) and hallway(s) of a home environment as indicated by an X in a box preceding each chosen task selection. In other scenarios and/or embodiments, more, fewer, and/or different information, selections, and/or user-interface controls can be provided by task selection window 300.”] Examiner submits that in Fig.1 the first parameter is chosen. Fig.1 depicts possible choices for where the robot will be stationed. Of the possible choices, “inside the home” was selected as the first parameter type. Fig.2 depicts more specific possible choices inside the home where the robot would be stationed. The selections include kitchen, bathrooms, bedroom, living room/dining room/hallways and garage. This is the second parameter type, dependently linked to the first parameter selection.
If the first parameter selected had been a restaurant, the second parameter options, such as the bedroom option, would be different.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sureka to incorporate the teachings of Morey of the first selection menu including a first selection option corresponding to a first task parameter type: in response to receiving a user selection of the first selection option of the first selection menu, displaying a second selection menu including a second selection option on the graphical user interface via the at least one output device, wherein the second selection option corresponds to a second task parameter type, wherein the first task parameter type and the second task parameter type are stored on at least one data storage device in a predefined linked manner on the basis of at least one hierarchy criterion that describes a hierarchically ordered link of the first task parameter type to the second task parameter type such that the second selection option is displayed and selectable in response to user selection of the first selection option and such that only task parameter types within a set of predefined linked task parameter types can be linked and selected to generate control information in order to enable easy customization/controlling of robotic devices based on task information readily available to a robotic device owner or user. [(Morey 47)]
Regarding claim 9, In view of the above combination of references, Sureka further teaches wherein a task parameter type relates to the at least one functional element of the functional device to be used for carrying out the task. [(see at least paragraphs 17-20, 29) As in 29 “Accordingly, if the user may wish to create a program specific to a functionality of the industrial robot, he may select a category 305C, and corresponding to that functionality all the graphical objects common within that category will be displayed to the user in the toolbox (304). For example, Figure 3B illustrates categories 'BEGIN AND END TASK', 'POSITION AND REFERENCE FRAME', 'PATH AND TARGET', 'MOTION', and 'CONTROL' as functions of the pick and drop / place type industrial operation to be defined for the industrial robot. Further, as illustrated in Figure 3B, when 'PATH AND TARGET' is selected from the categories, a plurality of graphical objects 405 are displayed associated with the 'PATH AND TARGET' category.”]
Regarding claim 10, In view of the above combination of references, Sureka further teaches wherein a task parameter type relates to at least one object to be transferred from a first state to at least one further state within the context of carrying out the task. [(see at least paragraphs 20-21, 29) As in 20 “Accordingly, a control logic can be defined for the task, and stored in a memory (e.g. as part of a library). Thus, when a selection of multiple graphical objects is performed in a specific order / flow, an operation can be defined. Here, according to the defined order / flow, the one or more control logic corresponding to the selected tasks can be fetched and linked (e.g. with a linker). It is possible that a parameter value (e.g. path, speed etc.) is provided in the selection. The corresponding control logic can accordingly be updated based on the provided values.” As in 29 “The categorizing and/or sub-categorizing are based on various functionalities of the industrial robot being programmed. Accordingly, if the user may wish to create a program specific to a functionality of the industrial robot, he may select a category 305C, and corresponding to that functionality all the graphical objects common within that category will be displayed to the user in the toolbox (304). For example, Figure 3B illustrates categories 'BEGIN AND END TASK', 'POSITION AND REFERENCE FRAME', 'PATH AND TARGET', 'MOTION', and 'CONTROL' as functions of the pick and drop / place type industrial operation to be defined for the industrial robot. Further, as illustrated in Figure 3B, when 'PATH AND TARGET' is selected from the categories, a plurality of graphical objects 405 are displayed associated with the 'PATH AND TARGET' category.”]
Regarding claim 11, In view of the above combination of references, Sureka further teaches wherein a task parameter type relates to at least one action to be carried out before carrying out the task. [(see at least paragraph 21) “The linking of control logic is thus formed (e.g. by two graphical objects 305A and 305B) as a result of being linked together using the environment 300 (e.g. with programming canvas 306). In one implementation, only those graphical objects can be linked which are semantically correct, i.e., are logical to be combined. For example, while defining the control logic for the operation to be performed with the industrial robot, one graphical object corresponding to identifying an object at a first target location is selected, and another graphical object corresponding to dropping the object at a second target location is selected. In this case, as the action of picking is required. This is because if the object was not picked, it cannot be dropped. Therefore, linking the two graphical objects would not be possible, and hence the control logic for pick and drop will not be defined”]
Regarding claim 13, In view of the above combination of references, Sureka further teaches wherein a task parameter type relates to at least one action to be carried out after carrying out the task. [(see at least paragraph 20) “Here, according to the defined order / flow, the one or more control logic corresponding to the selected tasks can be fetched and linked (e.g. with a linker). It is possible that a parameter value (e.g. path, speed etc.) is provided in the selection. The corresponding control logic can accordingly be updated based on the provided values.”]
Regarding claim 15, In view of the above combination of references, Sureka further teaches wherein a plurality of machines is controlled, wherein control of the plurality of machines to respectively carry out at least one task is controlled based on control information generated based on a plurality of task parameter types relating to operation of the plurality of machines to carry out the respective at least one task. [(see at least paragraph 17) “Various embodiments of the invention relate to performing an operation with an industrial robot(s). The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot). The controller configuration is performed using a robot programming system (or environment). Such a system can have an interface and a processor to generate one or more control logic for control of the industrial robot to perform the operation. For instance, the interface can display graphical objects, which can be selected, edited as required and combined by the user to generate the one or more control logic. The generated logic can then be provided to the controller of the robot, for controlling the robot to perform the operation.”]
Regarding claim 16, In view of the above combination of references, Sureka further teaches wherein a number of first task parameter types relate to the operation of the at least one first machine for carrying out the task to be carried out by means of the at least one first machine, and a number of further task parameter types relate to operation of at least one further machine for carrying out the task to be carried out by means of the at least one further machine, wherein the further task parameter types are linked to the first task parameter types on the basis of predefined task parameter type-specific links. [(see at least paragraphs 17,29) As in 17 “Various embodiments of the invention relate to performing an operation with an industrial robot(s). The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot). The controller configuration is performed using a robot programming system (or environment). Such a system can have an interface and a processor to generate one or more control logic for control of the industrial robot to perform the operation.” As in 29 “programming environment 300, wherein the toolbox 302 includes a plurality of graphical objects (405). The plurality of graphical objects are displayed according to a selection of one or more categories and/or subcategories of graphical objects (305C). The categorizing and/or sub-categorizing are based on various functionalities of the industrial robot being programmed. 
Accordingly, if the user may wish to create a program specific to a functionality of the industrial robot, he may select a category 305C, and corresponding to that functionality all the graphical objects common within that category will be displayed to the user in the toolbox (304). For example, Figure 3B illustrates categories 'BEGIN AND END TASK', 'POSITION AND REFERENCE FRAME', 'PATH AND TARGET', 'MOTION', and 'CONTROL' as functions of the pick and drop / place type industrial operation to be defined for the industrial robot. Further, as illustrated in Figure 3B, when 'PATH AND TARGET' is selected from the categories, a plurality of graphical objects 405 are displayed associated with the 'PATH AND TARGET' category.”]
Regarding claim 17, In view of the above combination of references, Sureka further teaches wherein the output device is an optical output device. [(see at least paragraph 35) “The system (400) may further include a displaying module (not shown). The displaying module is configured to display the robot programming environment 300 in accordance with the teachings of the present invention. The displaying module is further configured to display the robot programming environment 300 (e.g. remotely) as a programming interface on a display of a device of the user (e.g. on a laptop, computer, smartphone or tablet device of the user).”]
Regarding claim 18, In view of the above combination of references, Sureka further teaches wherein at least one of the first selection menu or the second selection menu is a drop-down menu. [(see at least paragraph 27) “The robot programming system (environment) 300 may include an interface and processor, which provide a variety of tools to assist in performing the operation with the industrial robot using the method described hereinabove. In Figure 3, a toolbar 302 is shown. Such a toolbar may include various selectable buttons to select functions and options which are common to all the programs created in the offline integrated robot programming environment. These selectable buttons may include one or more of, but not limited to, a 'NEW', 'OPEN', 'SAVE', 'UNDO', 'REDO', 'SHOW CODE', 'CONTACT', 'SETTING' and 'HELP' options. The 'NEW' option is to create a new file for creating a program in the programming environment 300. Once the file is created it can be saved using 'SAVE' option. Also, the file can be opened using 'OPEN' option. During creation of a program by using the robot programming environment 300, the user's actions can be undone, redone using 'UNDO', and 'REDO' options respectively. The option 'SHOW CODE' can be used to see the software code(s) corresponding to the graphical objects (305) in the programming canvas (306). However, the described selectable buttons are not restrictive and can be customized to utilize many other functionalities of the robot programming environment 300. The functionality implemented with the graphical objects (305) linked together in the programming canvas (106) may be tested with the robot simulator 308 or with the controller.”]
Regarding claim 19, In view of the above combination of references, Sureka further teaches wherein a handling device of a collaborative industrial robot comprising at least one handling element that can be moved in at least one degree of freedom of movement, is used as the functional device of the at least one machine. [(see at least paragraph 17) “The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot).”]
Regarding claim 20, In view of the above combination of references, Sureka further teaches the at least one machine comprising: the at least one functional device that comprises the at least one functional element for carrying out the at least one task [(see at least paragraph 17) “Various embodiments of the invention relate to performing an operation with an industrial robot(s). The operation (process) may be painting, packaging, assembly etc., which may need one or several industrial robots working together. An operation may include one or more tasks such as, but not limited to, pick, drop, grab, and move. The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot). The controller configuration is performed using a robot programming system (or environment).”], and at least one control device implemented in hardware and/or software, for controlling the operation of the at least one machine on the basis of the control information [(see at least paragraph 17) “The operation may be performed by programming (or configuring) a controller(s) of the industrial robot (programming the robot). The controller configuration is performed using a robot programming system (or environment). Such a system can have an interface and a processor to generate one or more control logic for control of the industrial robot to perform the operation. For instance, the interface can display graphical objects, which can be selected, edited as required and combined by the user to generate the one or more control logic. The generated logic can then be provided to the controller of the robot, for controlling the robot to perform the operation.”], wherein the control device is configured for carrying out the method according to claim 1. [(see rationale for claim 1 above)]
Regarding claim 23, In view of the above combination of references, Sureka further teaches the method further comprising displaying a tree structure on the graphical user interface, the tree structure comprising: the first selection menu including the first selection option; the second selection menu including the second selection option; and a graphical link between the first selection menu and the second selection menu. [(see at least Fig.3B-305C, paragraph 29) “Figure 3B illustrates another embodiment of the robot programming environment 300, wherein the toolbox 302 includes a plurality of graphical objects (405). The plurality of graphical objects are displayed according to a selection of one or more categories and/or subcategories of graphical objects (305C). The categorizing and/or sub-categorizing are based on various functionalities of the industrial robot being programmed. Accordingly, if the user may wish to create a program specific to a functionality of the industrial robot, he may select a category 305C, and corresponding to that functionality all the graphical objects common within that category will be displayed to the user in the toolbox (304). For example, Figure 3B illustrates categories 'BEGIN AND END TASK', 'POSITION AND REFERENCE FRAME', 'PATH AND TARGET', 'MOTION', and 'CONTROL' as functions of the pick and drop / place type industrial operation to be defined for the industrial robot. Further, as illustrated in Figure 3B, when 'PATH AND TARGET' is selected from the categories, a plurality of graphical objects 405 are displayed associated with the 'PATH AND TARGET' category.”]
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sureka in view of Morey and in further view of Linnell (US 2015/0273685 A1).
Regarding claim 4, Modified Sureka has all of the elements of claim 1 as discussed above.
Sureka does not explicitly teach wherein the first task parameter type and the second task parameter type are linked on the basis of at least one safety criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to safety relevant aspects of the operation of the machine.
However, Linnell teaches wherein the first task parameter type and the second task parameter type are linked on the basis of at least one safety criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to safety relevant aspects of the operation of the machine. [(see at least paragraph 54) “In further examples, safety systems 90 may be provided for preventative safety in detecting potential collisions between device actors in modeling the motion of the actors through a global timeline. Further, such modeling through a global timeline may be used to set safety parameters for safety systems 90. Modeling of locations and velocities of device actors through a global timeline may enable identification of unsafe zones and unsafe times in an area of a physical workcell. Such an identification may be used to set sensing triggers of object detectors that are part of an example safety system. For example, if an area within 5 feet of a certain device actor is determined to be at risk of collision, and a buffer zone of 10 additional feet is required to insure safety during operation, a LIDAR detector may be configured to detect unexpected objects and movement within a 15 foot area of the device actor during operation, and to automatically create a safety shutdown if an object is detected.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sureka to incorporate the teachings of Linnell of the first task parameter type and the second task parameter type being linked on the basis of at least one safety criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to safety relevant aspects of the operation of the machine in order to reduce the risk of collision and create a buffer zone to insure safety during operation. [(Linnell 54)]
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sureka in view of Morey and in further view of Hazan (US 2019/0344443 A1).
Regarding claim 5, Modified Sureka has all of the elements of claim 1 as discussed above.
Sureka does not explicitly teach wherein the first task parameter type and the second task parameter type are linked on the basis of at least one efficiency criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to efficiency-relevant aspects of the operation of the machine.
However, Hazan teaches wherein the first task parameter type and the second task parameter type are linked on the basis of at least one efficiency criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to efficiency-relevant aspects of the operation of the machine. [(see at least paragraphs 39-42) As in 42 “In order to determine the optimal sequence of tools/chains for the twin robot 20 in the first embodiment of FIG. 3, available robot configurations are determined and then the shortest path is determined from all valid robotic paths represented by a graph.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sureka to incorporate the teachings of Hazan of the first task parameter type and the second task parameter type being linked on the basis of at least one efficiency criterion that describes an ordered link of the first task parameter type to the second task parameter type with regard to efficiency-relevant aspects of the operation of the machine in order to determine the most efficient multiple robot configuration for the full location sequence so that the multiple robots can optimally execute their tasks. [(Hazan 23)]
Claims 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sureka in view of Morey and in further view of Battles (US 2017/0369244 A1).
Regarding claim 12, Modified Sureka has all of the elements of claim 11 as discussed above.
Sureka does not explicitly teach wherein: an action to be performed before carrying out the task is a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object to be transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task; or an action to be carried out before carrying out the task is a feed of at least one object to be transferred from a first state into at least one further state within the context of the task, wherein the task parameter type describes at least one object feed parameter relating to the feed of an object into an action area of the at least one industrial robot.
However, Battles teaches wherein: an action to be performed before carrying out the task is a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object to be transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task; or an action to be carried out before carrying out the task is a feed of at least one object to be transferred from a first state into at least one further state within the context of the task, wherein the task parameter type describes at least one object feed parameter relating to the feed of an object into an action area of the at least one industrial robot. [(see at least paragraphs 50-55) As in 50 “Adjacent the transition section is a pick conveyor 314 configured to convey singulated items from the transition section 310 to the pick area 315. Positioned overhead and above the pick area is an imaging element 316. The imaging element 316 is oriented so that the pick area 315 is within a field of view of the imaging element 316. In some implementations, the pick area 315 includes an illuminated surface or backlight 317 that illuminates toward the imaging element 316.” As in 51 “One or more item detection components 319-1, 319-2 may be positioned along the pick conveyor to detect a presence of an item on the pick conveyor. The item detection component may be, for example, a photoelectric sensor (aka photo eye), pressure sensor, imaging element, and/or other component that can be used to detect a presence or absence of an item on the pick conveyor. In some implementations, the overhead camera 316 may be used as the item detection component.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sureka to incorporate the teachings of Battles of an action to be performed before carrying out the task being a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object to be transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task in order to detect a presence of an object in the pick area. [(Battles 53)]
Regarding claim 14, Modified Sureka has all of the elements of claim 13 as discussed above.
Sureka does not explicitly teach wherein: an action to be carried out after carrying out the task is a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task; or an action to be carried out after carrying out the task is a placing of at least one object transferred from a first state into at least one further state within the context of the task, wherein the task parameter type describes at least one object placing parameter relating to the placing of an object, in at least one placing area.
However, Battles teaches wherein: an action to be carried out after carrying out the task is a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task; or an action to be carried out after carrying out the task is a placing of at least one object transferred from a first state into at least one further state within the context of the task, wherein the task parameter type describes at least one object placing parameter relating to the placing of an object, in at least one placing area. [(see at least paragraphs 47-55) As “the end effector may be a physical surface that is positioned adjacent an item and used to swipe, push, or otherwise move an item from the pick area away from the singulation station. In still another example, the second robotic unit 204 may be a UAV that is configured with an end effector that can pick or otherwise move items from the pick area.” As in 51 “One or more item detection components 319-1, 319-2 may be positioned along the pick conveyor to detect a presence of an item on the pick conveyor. The item detection component may be, for example, a photoelectric sensor (aka photo eye), pressure sensor, imaging element, and/or other component that can be used to detect a presence or absence of an item on the pick conveyor. In some implementations, the overhead camera 316 may be used as the item detection component.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sureka to incorporate the teachings of Battles of an action to be carried out after carrying out the task being a detection of an object, wherein the task parameter type describes at least one object detection parameter relating to the detection of at least one object transferred from a first state to at least one further state by means of a detection device in the course of carrying out the task in order to detect a presence of an object in the pick area and obtain an image of the object for processing a pick point. [(Battles 53)]
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Sureka in view of Morey and in further view of Haddadin (US 2018/0354141 A1).
Regarding claim 21, Modified Sureka has all of the elements of claim 1 as discussed above.
Sureka does not explicitly teach wherein the hierarchically ordered link of the first task parameter type to the second task parameter type cannot be changed by a user of the method.
However, Haddadin teaches wherein the hierarchically ordered link of the first task parameter type to the second task parameter type cannot be changed by a user of the method. [(see at least paragraphs 25, 28 and 30) As in 25 “In a preferred embodiment of the robotic system according to the invention, the operator can use the keys 13, 14, 15, 16 and 17 of the input device to select the desired operations, that the robotic system should perform to accomplish a given task, from a menu displayed on the graphical user interface, in that the operator moves e.g. by means of the D-pad short stroke key, in the menu to the corresponding operation icon and then, after having selected this icon, confirming this icon by pressing one of the four operating keys 13, 14, 15 and 16, which keys have been previously set with a corresponding function.” As in 28 “In a further embodiment, the robotic system according to the invention can also be designed in such a way that the control unit is designed to, for each operation, display in the graphical user interface during the parameterization of an operation a predetermined parameterization submenu (context menu) stored in the control unit, in which submenu the various predetermined parameterization options are shown, which can then be selected with the input device on the pilot head 9 via the keys 13, 14, 15, 16, 17 and/or 20 by means of a control of the graphical user interface of the parameterization submenu in order to perform a parameterization.” As in 30 “In a further embodiment, the control unit stores all possible operations of the robotic system and all possible parameterization submenus aimed for these operations, which are structured such that the operator can conduct all programming of the robotic system at the input device with a very limited number of input elements, e.g. keys, so that the programming can be done without the aid of external input devices such as computer keyboards. Ideally, with the pilot head as shown in FIG. 1, this can even be done with only one hand, so that the operator's second hand is free to be used for other functions, e.g. the actuation of an EMERGENCY STOP switch.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sureka to incorporate the teachings of Haddadin of the hierarchically ordered link of the first task parameter type to the second task parameter type not being changeable by a user of the method in order to execute a plurality of predefined operations with a very limited number of input operations. [(Haddadin 30)]
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED YOUSEF ABUELHAWA whose telephone number is (571)272-3219. The examiner can normally be reached Monday-Friday 8:30-5:00 with flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED YOUSEF ABUELHAWA/Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656